I shall keep this brief. A few weeks ago, two parcels, one containing some books and the other a new router from my new ISP (the old one having been taken over – so fun and games await on switchover day!!), were due to be delivered on the same day. Both courier companies emailed me to tell me the date. Subsequently, company A emailed me to tell me that the router would be delivered between 9.57 and 10.57 am. Company B told me that my books would be delivered…’before the end of the day’.
I rang up company B and very politely inquired as to whether they could give me a slightly more accurate delivery time. After all, we live in the digital age, when it’s possible not only to estimate delivery times from the planned delivery schedule but also to obtain live updates from the delivery driver. I was told, equally politely, that things like traffic congestion meant they could not give me a specific delivery window. The router arrived at 10.15, well within the delivery slot, and the books arrived at 4.30pm. To be fair to company B, I was offered, and accepted, the option of nominating a ‘safe place’ where my books could be left for me.
I’m sure readers can guess which of the two courier companies I would engage with in the future when it comes to sending out parcels. And what might seem a relatively trivial life incident is a great example of how businesses are going to sink or swim in the digital age. Those who can combine modern technology with good customer service should survive and thrive; those who get left behind, well, they might not be in business for too long.
Which reminds me, before I go, that the level of unsolicited marketing emails received in the past few weeks across one of my three email accounts has reached epidemic proportions. Many times a day folks promise me that my website (I do not have a website myself) will look amazing and rank No.1 on every search engine known to man and woman. Somebody’s spam filter is not working very well. And GDPR doesn’t seem to be working either!
Never mind. I hope that you enjoy reading the May issue of DW and that at least some of the articles help you understand more about the technologies and ideas available to develop your business into the future.
Zscaler report provides insights into user behaviour and the challenges with mobility and remote access in the face of cloud transformation.
Zscaler has published its Digital Transformation Report EMEA 2019, which found that 72% of organisations have a majority of their employees accessing applications and data in the cloud or the data centre on their mobile devices, with 29% of companies claiming that number to be more than 75% across the UK, Germany, France, and the Benelux region. This high rate of mobility coincides with the top drivers for digital transformation initiatives, which include enabling greater flexibility for employees (37%) and implementing more efficient processes (38%).
When asked about the biggest obstacle to digital transformation, however, security topped the list across all four regions. Eighty percent (80%) of enterprises have security concerns about the way in which employees remotely access data and applications, with the primary focus on the use of unsecured networks (34%) and unmanaged devices (21%) as well as blanket access to the entire corporate network (20%).
Companies embarking on digital transformation initiatives are beginning to recognise that the traditional way of providing remote access connectivity to applications residing in the cloud or on corporate networks is riddled with security risks. With the extension of the perimeter to the internet, segmentation at the application level is needed to strengthen the security posture in the cloud era, when mobile employees, consultants, and third parties require access.
“Digital transformation is a powerful business enabler with many potential benefits—from added flexibility for employees to cost and efficiency savings —and it must be a process involving input from all aspects of the business, not just IT,” said Stan Lowe, Global CISO at Zscaler. “With applications moving to the cloud, and users connecting from everywhere, the perimeter is long gone. It’s therefore time to decouple security from the network and use policies that are enforced anywhere applications reside and everywhere users connect. Ultimately, as applications move to the cloud, security needs to move there too.”
The report also found that digital transformation is predominantly an IT decision; however, business decision-makers are increasingly driving this initiative, led by the Chief Information Officer (54%) and Chief Digital Officer (47%). Furthermore, 18% claim their CEOs are pushing for and owning digital transformation. The top reasons for embarking on a digital transformation journey were increased flexibility for employees (37%), a new business strategy to focus on core competencies (36%), improved profit margins (36%) and increased cost savings (35%).
“Companies have to consider the effect that application transformation has on their network performance, bandwidth consumption and the latency added by hub-and-spoke architectures from the outset,” Lowe concluded. “Moving applications to the cloud needs to be considered in line with new network infrastructure and security requirements. The new imperative is direct-to-Internet access with security policies that protect users, regardless of their location or chosen device.”
More than half of companies surveyed are putting mission-critical apps in the cloud.
For the first time, a majority of companies are putting mission critical apps in the cloud, according to the latest report released by Cloud Foundry Foundation, home to a family of interoperable open source projects for the enterprise, at its North American Cloud Foundry Summit in Philadelphia. The study revealed that companies treat digital transformation as a constant cycle of adaptation rather than a one-time fix. As part of that process, cloud technologies such as Platform-as-a-Service (PaaS), containers and serverless continue to grow at scale, while microservices and AI/ML are next to be integrated into their workflows.
As more companies embrace the reality of digital transformation, they are adapting to the iterative journey that unfolds. Case in point: 74 percent of respondents equate digital transformation to “perpetual shifts and constant adaptation of new technology,” compared to 26 percent who view digital transformation as a “one-time change and adoption of new technology.” More than three quarters of IT decision makers believe digital transformation is a reality, and 86 percent of CIOs, CTOs and Line of Business leaders agree.
“The vast majority of companies agree digital transformation is a constant process of incremental change, rather than a one-time initiative, and are realizing their long-term strategy must involve adaptation to a wide range of unforeseen challenges and technological changes,” said Abby Kearns, Executive Director, Cloud Foundry Foundation. “Although companies are starting to see the benefits of advanced cloud technologies, what’s coming next—artificial intelligence, machine learning and blockchain, for example—will continue to prove that the only constant in technology is change.”
Almost 5,000 knowledge workers and business decision makers share their insight on how RPA is changing business operations and employees’ working lives.
A global automation report that surveyed nearly 5,000 business decision makers and knowledge workers reveals that the majority of the latter (83 percent) are comfortable with reskilling in order to work alongside the digital workforce. A further 78 percent of knowledge workers say they’re ready to take on a new job role, according to a new report called “Automate or Stagnate: The Impact of Intelligent Automation on the Future of Work” from Blue Prism.
This sentiment runs contrary to a belief popularly held in the market and among business decision makers (70 percent) that employees are afraid of losing their jobs to automation. In fact, only 37 percent of knowledge workers harbor fears about job loss, as Robotic Process Automation (RPA) is having a positive impact on workplaces. One thing is certain: the impact of automation is being felt from the boardroom to the shop floor.
Most business decision makers also believe that RPA (88 percent) and Intelligent Automation (83 percent) are solutions to the global productivity problem and that both RPA (95 percent) and Intelligent Automation (93 percent) are crucially important in driving digital transformation. As evidence of the growing popularity of RPA, more than three-quarters of knowledge workers (78 percent) have experienced some of their daily tasks being automated in the last 12 months.
It’s a good thing too. Over a third of knowledge workers (34 percent) don’t believe their businesses can remain competitive in the next five years with a purely human workforce. This, alongside time-saving, cost-saving and improved accuracy benefits that automation offers, could be amongst the reasons why an incredible 92 percent of business decision makers plan to extend use cases of automation across their businesses.
“A new wave of economics, driven by automation and Artificial Intelligence, is emerging across the globe,” says Chris Bradshaw, Blue Prism’s Chief Marketing Officer. “This technology is disruptive, in the most positive sense. It is changing how organizations view themselves, how they operate and how the people that drive them, live and work. As we enter a new era of connected-RPA, this technology will open doors for the most digitally savvy employees to create and innovate. This is the first technological revolution to place the human at the heart of the creative value chain which is why it has such exponential potential. We will deliver a roadmap for how businesses can transform economic output, with AI and RPA at the heart of that change.”
Change Doesn’t Have to Be Hard
Despite the progress that has already been made, businesses need to address cultural considerations if they are to tap into the technology’s latent potential. In order to increasingly incorporate RPA, two-thirds of knowledge workers agree that their business culture needs to evolve. This is because more than half of respondents (53 percent) have colleagues with concerns over the introduction of the technology, and 44 percent aren’t confident about their own ability to adapt to work alongside the digital workforce.
To this end, business decision makers are conscious that they need to build trust between employees and the digital workforce (84 percent). Unfortunately, 68 percent of knowledge workers believe their employers need to do more to build this trust. Improving internal communications is thought by 74 percent of business decision makers to be the best way to do this, a view echoed by 67 percent of knowledge workers. Communication is followed by the need for in-depth training (62 percent of business decision makers, 59 percent of knowledge workers).
The good news is, organizations feel relatively well prepared for changes and are invested in making the adoption of RPA a success. Over three-quarters of business decision makers (76 percent) feel that they are actively on the case of cultural change, incorporating the digital workforce into their daily working practices and encouraging human employees to engage with the technology.
Almost four in five knowledge workers (78 percent) also believe that acquiring new skills is essential to remain employable, which may make the cultural change and adoption process of automation and RPA easier. This is echoed by business decision makers (76 percent), who agree that their new hires are more prepared to work alongside a digital workforce, and that adopting these technologies is an important factor in attracting and retaining the best talent.
Benefits Outweigh any Challenges
According to business decision makers (94 percent) and, to a large extent, knowledge workers (73 percent), the benefits of RPA/Intelligent Automation are well understood. However, despite this positive sentiment, there is still a significant gap in understanding between business leaders and their employees. More than three-quarters of business decision makers (76 percent) agree that their organization has been positively impacted by automation, a sentiment mirrored by 65 percent of knowledge workers.
“Embracing RPA has been a part of the ‘bank-of-the-future’ objective and freeing up colleagues from mundane, repetitive tasks. We’ve taken the robot out of the human, in order to enable those colleagues to fulfil more purposeful roles, as we forge ahead with the next stage of our strategy,” says Gerald Pullen, Head of Continuous Improvement & RPA at Lloyds Banking Group.
“This report proves that there are some dramatic changes ahead in business as far as both technology and the workforce are concerned. But it’s a positive change,” Bradshaw continued. “It is up to the global business community to recognize this and provide the tools that their employees most desire and that will release their creativity and innovation.”
According to a new research study from Future Market Insights, the global data centre security market is estimated to exhibit a CAGR of 11.1% during 2018-2028.
As the market penetration of data centres has proliferated, a corresponding increase in demand for the security of their infrastructure and data has been witnessed worldwide. One of the key trends governing the growth of the global data centre security market is the increasing number of initiatives undertaken by several regional governments in collaboration with various Tier-1 and Tier-2 companies. A majority of these companies are among the leading providers of data centre security solutions and services, securing critical data stored at data centres in their respective countries and regions.
According to the analysis, in addition to joint initiatives between governments and key players, the data centre security landscape has been witnessing a surge in the introduction of virtualized data centre security solutions via collaborations between multiple channel partners. While these channel partners aspire to a stronger market presence, their joint activities are likely to create a new trend in the data centre security market. However, owing to increasing enterprise virtualization, physical security solutions are expected to exhibit poor demand growth in the data centre security market.
BFSI & Healthcare to Remain Highly Lucrative End-use Segments in Data Center Security Market
On the basis of end use, the report envisages a higher rate of adoption of data centre security within critical end-use verticals such as IT & telecom, BFSI, healthcare, government & defense, and media & entertainment. However, the rate at which adoption has been rising in the BFSI and healthcare segments will be particularly high, reflecting several lucrative growth opportunities for the data centre security market.
Fast Developing Asian Markets Generating High Demand for Data Center Security Solutions & Services
While North America will retain a dominant position in the global data centre security market throughout the forecast period, analysis points to a positive growth outlook emerging for some of the developing Asian countries. The report opines that data centre security services and solutions will witness strong demand from markets in East Asia and South Asia, with both regions likely to register a higher revenue CAGR over the course of the projection period.
Due to the increasing adoption of data centres across enterprises of all sizes, including SMEs, within rapidly developing economies such as Indonesia, India, and China, the report estimates relatively higher demand for data centre security solutions and services within these countries over the coming years. However, as awareness of the significance of data centre security remains limited across various underdeveloped and some developing countries, the consulting segment of the data centre security market will exhibit a considerably high growth rate, the report opines.
First keynote presentations announced: changing customer buying behaviours, the impact of Blockchain, Big Data, Cloud and IoT, and trends in Merger & Acquisition activity are amongst the subjects being addressed.
The highly successful Managed Services Summit series of events is returning to Amsterdam in May for its third year. The event will examine some of the latest developments impacting the industry and assess the impact of new technologies on the Managed Service Provider (MSP) sector in Europe.
The European Managed Services & Hosting Summit 2019 comes at a time when the industry is being subjected to many changes. The sector as a whole is growing, and the rising demand for skills has caused a surge in the level of merger and acquisition activity as successful MSPs aim to pick up less fortunate rivals and other types of reseller outside their home regions.
Among the speakers now announced is Gartner Research Director Mark Paine, who will deliver the opening keynote examining the changing nature and needs of customers. Under the title “Working with customers and their chaotic buying processes”, he will present the Gartner view on how the changed customer buying process has become hard to monitor and follow, and can be abruptly foreshortened. “Who are the real customers anyway?” he will ask, drawing on research into changing buying processes.
Jonathan Simnett, director at Hampleton Partners, will examine the latest trends in European IT mergers & acquisitions, the factors driving demand and how to build value into an MSP, reseller or services business. Tech services and support need resources, and many of the more successful MSPs are taking the view that it is cheaper and easier to buy, rather than build, to get them. This is increasing demand for limited resources and resulting in buyers bidding up prices. Other factors increasing demand include Big Data, Cloud and IoT, which are continuing to drive consolidation in the market as larger customers look to use what they offer.
Igor Pejic from BNP Paribas will discuss the rise of blockchain and its application to managed services, what it means for every industry and what it offers to MSPs. Blockchain is a different way to handle databases, and although it started in financial services, it will enhance supply chains and any area where proof of identity is required. “Managed services is a new area addressed by blockchain. There has been a lot of development and we are now seeing blockchain as a service, which allows smaller companies to scale it to what their business needs. We will see a lot of industry-specific applications soon,” says Igor. A copy of Blockchain Babel, Igor Pejic's important new book on blockchain, which has been selected as a book of the month by the FT, is being given free to those attending the Managed Services Summit in Amsterdam.
The European Managed Services & Hosting Summit 2019 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships aimed at supporting sales. Building on the success of previous managed services and hosting events in London and Amsterdam, the summit will feature a high-level conference programme exploring the impact of new business models and the changing role of information technology within modern businesses. These conference sessions will be augmented by both business and technology breakout tracks within which leading vendors and service providers will provide further insight into the opportunities for channel organisations looking to expand their managed services portfolios. Throughout the day there will also be many opportunities for both sponsors and delegates to meet fellow participants within the Summit exhibition and networking area.
The European Managed Services and Hosting Summit 2019 will take place at the Novotel Amsterdam City Hotel, on 23 May 2019. MSPs, resellers and integrators wishing to attend the convention and vendors, distributors or service providers interested in sponsorship opportunities can find further information at: www.mshsummit.com/amsterdam
The DCS Awards Finalists are all waiting eagerly to find out who will be announced as Winners at the gala ceremony in London on 16 May at the Leonardo Royal St Paul’s Hotel. Thousands of votes have been cast for the Finalists and the competition has been fierce.
The DCS Awards celebrate and reward the products, projects and solutions as well as honour companies, teams and individuals operating in the data centre arena.
There are still a few places left at the awards ceremony so get in touch at awards@dcsawards.com to book your place at what will be a glittering and entertaining evening.
19:00 | Drinks Reception
19:30 | Dinner Served
21:00 | Comedian - Zoe Lyons
21:30 | Awards Presentations
22:00 | Live Music & Casino
00:00 | Carriages
Uninterruptible Power Supplies Ltd (UPSL), a subsidiary of Kohler Co, and the exclusive supplier of PowerWAVE UPS, generator and emergency lighting products, changed its name to Kohler Uninterruptible Power (KUP), effective March 4th, 2019. UPSL’s name change is designed to ensure the company’s name reflects the true breadth of the business’ current offer, which now extends to UPS systems, generators, emergency lighting inverters, and class-leading 24/7 service, as well as highlighting its membership of Kohler Co. This is especially timely as next year Kohler will celebrate 100 years of supplying products for power generation and protection.
Entertainment Sponsor
Established nearly 90 years ago, Universal Electric Corporation (UEC), the manufacturer of Starline, has grown to become a global leader in power distribution equipment. Originally founded in Pittsburgh, PA, USA as an electrical contracting firm, the company began manufacturing in the mid-1950s.
Category Sponsors
CBRE Data Centre Solutions (DCS) is the leading provider of full-spectrum life cycle services to data centre owners, occupiers, and investors, including consulting services, advisory and transaction services, project management, and integrated data centre operations.
The DCA is a not-for-profit trade association comprising leaders and experts from across the data centre sector. With over 450 Associate and Corporate members, the DCA represents the largest independent data centre trade association of its kind.
Nlyte Software enables teams to improve how they manage their computing infrastructure across their entire organization – from the laptops to desktops to data centers, from colocation to edge to IoT devices.
Trusted by businesses worldwide, we ensure our clients’ critical loads are our priority. Whilst Uninterruptible Power Supplies form the cornerstone of Power Control, our rich history and experience of the entire electrical path enable us to offer much more than just backup power. Our entire product portfolio is meticulously selected so that the right solutions can be designed and delivered for our clients’ exact power protection requirements.
NaviSite powers business innovation of the enterprise with its comprehensive portfolio of multi-cloud managed services, which spans infrastructure, applications, data, and security. For more than two decades, enterprise and mid-market clients have relied on NaviSite to unlock efficiencies and improve execution capabilities, leveraging a client-focused delivery model that couples deep technical expertise with state-of-the-art global platform and data centres.
Founded in 1986, Riello Elettronica is part of the wider Riello Industries group. Originally a manufacturer of switching power supplies for IT, the Group evolved into a manufacturer of uninterruptible power supplies.
Schneider Electric is leading the Digital Transformation of Energy Management and Automation in Homes, Buildings, Data Centers, Infrastructure and Industries.
SureCloud is a provider of cloud-based, integrated Risk Management products and Cybersecurity services, which reinvent the way you manage risk.
The converged managed services platform for your journey to successful digital transformation.
The full 2019 shortlist is below:
Data Centre Energy Efficiency Project of the Year
Aqua Group with 4D (Gatwick Facility) | Digiplex with Stockholm Exergi |
EcoDataCentre with Falu Energi & Vatten | Iron Mountain Green Power Pass |
Six Degrees Energy Efficiency | Techbuyer with WindCORES |
New Design/Build Data Centre Project of the Year
Cyrus One – Frankfurt II | IP House supported by Comtec Power |
Interxion supporting Colt Technology Services | Power Control supporting CoolDC |
Siemon – with iColo | Turkcell Izmir Data Centre |
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Alinma Database Migration | Efficiency IT supporting Wellcome Sanger Institute |
Huawei supporting NLDC, Oude Meer | IP House supported by Comtec Power |
PPS Power supporting The Sharp Project, Manchester City Council | Six Degrees Birmingham South Facility |
SMS Engineering supporting Regional Council of Puglia Region, Italy | Sudlows supporting Science & Technology Facilities Council |
Techbuyer supporting University of Cambridge |
Cloud Project of the Year
Cristie Data with Clifton College | N2W Software for AWS |
Pulse Secure with Atlassian | Surecloud with Equiom Group |
Timico with The Royal Society of Chemistry (RSC) | VMware CloudHealth for Adstream |
Zadara with Brandworkz |
Managed Services Project of the Year
Altaro with Chorus | Cristie Data with Hazlewoods |
Pulse Secure with Healthwise | Navisite with Ed Broking |
Timico with Youngs Pubs |
GDPR Compliance Project of the Year
Digitronic with HQM Induserv GmbH | GDPR Awareness Coalition supporting Irish SMEs |
Navisite with Ed Broking | Surecloud with Everton FC |
Data Centre Facilities Innovation Awards
Data Centre Power Innovation of the Year
E1E10 - Hotboxx-i | Digiplex - Waste Heat to Warm Homes solution |
APC by Schneider - Smart-UPS | Huawei - FusionPower Solution |
Master Power Technologies - Universal Controller |
Data Centre PDU Innovation of the Year
Raritan - Residual Current Monitoring modules | Servertech - HDOT Cx PDU |
Starline - Cabinet Busway |
Data Centre Cooling Innovation of the Year
Custodian – AHU system Solution | Digiplex – Concert Control |
Mitsubishi - TRCS-EFC-Z | SMS Engineering – Cooling Containment Solution |
Transtherm and 2bm – Budget-friendly, Compressor-less Cooling Solution | Vertiv - Knurr DCD Cooling Door |
Data Centre Intelligent Automation and Management Innovation of the Year
Nlyte Software – Dedicated Machine Learning Solution | Opengear - IM7216 |
Schneider Electric - EcoStruxure IT Solutions | Siemon - Datacenter Clarity |
Data Centre Physical Connectivity Innovation of the Year
Corning - RocketRibbon | Infinera & Telia - Autonomous Intelligent Transponder (AIT) Solution |
Schneider Electric - HyperPod | Wave2Wave - ROME 64Q and 128Q robotic optical switches |
Zyxel - USG110 Unified Security Gateway |
Data Centre ICT Innovation Awards
Data Centre ICT Storage Innovation of the Year
Archive 360 - Archive2Azure | DataCore and Waterstons - SANsymphony |
Rausch - Sasquatch SDI Appliance | SUSE - Linux Enterprise Server |
Tarmin -GridBank Data Management Platform |
Data Centre ICT Security Innovation of the Year
Chatsworth Products - eConnect Electronic Access Control | Frontier Pitts - Secured by Design |
RDS Tool - RDS-Knight |
Data Centre ICT Management Innovation of the Year
Ipswitch - WhatsUp Gold 2018 | Schneider Electric - EcoStruxure IT |
Tarmin - GridBank Data Management Platform |
Data Centre ICT Networking Innovation of the Year
Bridgeworks - WAN Data Acceleration Solutions | Silver Peak - Unity EdgeConnect SD-WAN Edge Platform |
Wave2Wave - Robotic Optical Management Engine Solution |
Data Centre ICT Automation Innovation of the Year
Morpheus Data - Unified Automation Framework Solution | Wave2Wave - Robotic Optical Management Engine Solution |
Open Source Innovation of the Year
Arista Networks - Arista 7360X Series | Juniper Networks - Native Integration with SONiC |
OVH - Managed Kubernetes Service | SUSE - Manager for Retail |
Data Centre Managed Services Innovation of the Year
Lamda Helix | METCloud |
ra Information Systems | Scale Computing with Corbel |
Schneider Electric |
Data Centre Hosting/co-location Supplier of the Year
ARK Data Centres | Green Mountain |
Iron Mountain | Navisite |
Node4 | Rack Centre |
Systron Micronix | Timico |
Volta Data Centres |
Data Centre Cloud Vendor of the Year
Arcserve | IOMART |
PhoenixNAP | Pulse Secure |
Data Centre Facilities Vendor of the Year
AVK | Cannon Technologies |
CBRE | Dataracks |
EcoCooling | Enlogic |
Johnson Controls | Panduit |
Excellence in Data Centre Services Award
Curvature | Green Mountain |
Iron Mountain | Park Place Technologies |
Rack Centre | UKFast |
Data Centre Manager of the Year
Ole Sten Volland - Green Mountain | Amit Anand - NECTI |
Sunday Opadijo - Rack Centre | Simon Binley - Wellcome Sanger Institute |
Data Centre Engineer of the Year
Abdullah Saleh Alharbi - Saudi Aramco | Sam Wicks - Sudlows |
Sinan Alkas - Turkcell | Turgay Parlak - Turkcell |
Artificial intelligence (AI) is increasingly widespread in everyday life, even if we might not immediately recognise it. For example, AI gives consumers tailored product recommendations based on their recent online purchases, as well as pinpoint-accurate congestion warnings via GPS software. In the business realm, AI adoption within organisations has tripled in the past year, and AI is a top priority for CIOs.
By Chirag Dekate, senior director analyst.
Despite this enthusiasm for AI, early initiatives are prone to failure due to misalignment with business requirements and a lack of agility. Successful AI implementation as a core accelerant of digital business initiatives is reliant on strategic use by infrastructure and operations (I&O) leaders.
Although the potential for success is enormous, delivering business impact with AI initiatives often takes much longer than expected. It is therefore imperative that IT leaders plan early and use agile techniques to increase relevance and success rates.
IT leaders should take account of the following five Gartner predictions concerning the rapid evolution of AI tools and techniques, and how these are likely to apply to their organisation.
AI will drive infrastructure decisions
The use of AI within organisations is growing rapidly. Between now and 2023, AI will be one of the main workloads influencing decisions about infrastructure. Accelerating AI adoption requires specific infrastructure resources that can grow and evolve alongside technology. AI models will need to be periodically refined by the enterprise IT team to ensure high success rates.
Management of increasingly complex AI techniques will require collaboration
One of the main challenges to using AI techniques like machine learning (ML) and deep neural networks (DNNs) in edge and IoT environments is the complexity of the data and the required analytics. Traditional AI use cases that do not involve customer expectations succeed because of close collaboration between business and IT functions, so securing the help of internal engineering teams is essential.
Simple machine learning techniques will sometimes make the most sense
Between now and 2022, over 75% of organisations will use DNNs for use cases that could equally be addressed using classical ML techniques. Traditional approaches to ML are too often disregarded. After the hype attached to AI has been dispelled, it quickly becomes obvious that many businesses are preparing to apply deep learning techniques without fully understanding how they apply to their current initiatives. IT leaders should take the time to get acquainted with the full range of options available to them to address the issues facing their business. A simpler, more pragmatic ML approach may be all they need.
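To make that point concrete, the short sketch below shows the kind of classical ML baseline worth trying before reaching for a deep neural network: a plain logistic regression on a tabular dataset, trained in a few lines. The dataset, library (scikit-learn) and model choice are purely illustrative assumptions on our part, not anything prescribed by the Gartner research.

```python
# Illustrative only: a classical ML baseline of the kind worth trying before
# a deep neural network. Dataset, library and model are hypothetical examples.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for a typical tabular business dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple, interpretable and cheap to train compared with a DNN.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Classical baseline accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

If a baseline like this already meets the business requirement, the extra cost and complexity of deep learning may simply not be justified.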
Serverless computing will take the stage
Containers and serverless computing will enable ML models to serve as independent functions and, in turn, run more cost-effectively, with low overheads. A serverless programming model is particularly appealing in public cloud environments due to its quick scalability, so IT leaders should identify existing ML projects that might benefit from these new computing capabilities.
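As a rough illustration of what “ML models as independent functions” can look like, the sketch below wraps a trained model in a stateless, request-style handler of the sort a serverless platform would invoke on demand. The handler signature, payload shape and inline training are assumptions made for the sake of the example; real function-as-a-service platforms each define their own interfaces.

```python
# Illustrative sketch: exposing an ML model as a stateless function, the shape
# serverless platforms typically invoke per request. The handler signature and
# payload format are assumptions, not any specific provider's API.
import json

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# In a real deployment the model would be trained offline and loaded from
# storage at cold start; it is fitted inline here to keep the sketch self-contained.
X, y = load_iris(return_X_y=True)
MODEL = LogisticRegression(max_iter=1000).fit(X, y)

def handler(event: dict) -> dict:
    """Stateless entry point: takes a feature payload, returns a prediction."""
    features = [event["features"]]                  # one row of numeric features
    prediction = int(MODEL.predict(features)[0])
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

# Example invocation, roughly as a platform would call it for each request.
print(handler({"features": [5.1, 3.5, 1.4, 0.2]}))
```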
Automation will be adopted beyond the surface level
As the volume of data that organisations have to manage increases, so too will the challenge of ineffective problem prioritisation. Given the shortage of digital dexterity talent in the I&O sector, automation is a prime solution. By 2023, 40% of I&O teams in large organisations will use AI-augmented automation, resulting in higher IT productivity and greater agility and scalability.
Worldwide IT spending is projected to total $3.79 trillion in 2019, an increase of 1.1 percent from 2018, according to the latest forecast by Gartner, Inc.
“Currency headwinds fueled by the strengthening U.S. dollar have caused us to revise our 2019 IT spending forecast down from the previous quarter,” said John-David Lovelock, research vice president at Gartner. “Through the remainder of 2019, the U.S. dollar is expected to trend stronger, while enduring tremendous volatility due to uncertain economic and political environments and trade wars.
“In 2019, technology product managers will have to get more strategic with their portfolio mix by balancing products and services that will post growth in 2019 with those larger markets that will trend flat to down,” said Mr. Lovelock. “Successful product managers in 2020 will have had a long-term view to the changes made in 2019.”
The data center systems segment will experience the largest decline in 2019 with a decrease of 2.8 percent (see Table 1). This is mainly due to expected lower average selling prices (ASPs) in the server market driven by adjustments in the pattern of expected component costs.
The shift of enterprise IT spending from traditional (noncloud) offerings to new, cloud-based alternatives is continuing to drive growth in the enterprise software market. In 2019, the market is forecast to reach $427 billion, up 7.1 percent from $399 billion in 2018. The largest cloud shift has so far occurred in application software. However, Gartner expects increased growth for the infrastructure software segment in the near-term, particularly in integration platform as a service (iPaaS) and application platform as a service (aPaaS).
Table 1. Worldwide IT Spending Forecast (Billions of U.S. Dollars)
Segment | 2018 Spending | 2018 Growth (%) | 2019 Spending | 2019 Growth (%) | 2020 Spending | 2020 Growth (%) |
Data Center Systems | 210 | 15.5 | 204 | -2.8 | 207 | 1.7 |
Enterprise Software | 399 | 9.3 | 427 | 7.1 | 462 | 8.2 |
Devices | 667 | 0.3 | 655 | -1.9 | 677 | 3.5 |
IT Services | 982 | 5.5 | 1,016 | 3.5 | 1,065 | 4.8 |
Communications Services | 1,489 | 2.1 | 1,487 | -0.1 | 1,513 | 1.7 |
Overall IT | 3,747 | 4.0 | 3,790 | 1.1 | 3,925 | 3.6 |
Source: Gartner (April 2019)
“The choices CIOs make about technology investments are essential to the success of digital business. Disruptive emerging technologies, such as artificial intelligence (AI), will reshape business models as well as the economics of public- and private-sector enterprises. AI is having a major effect on IT spending, although its role is often misunderstood,” said Mr. Lovelock. “AI is not a product, it is really a set of techniques or a computer engineering discipline. As such, AI is being embedded in many existing products and services, as well as being central to new development efforts in every industry. Gartner’s AI business value forecast predicts that organizations will receive $1.9 trillion worth of benefit from the use of AI this year alone.”
Twenty-nine percent of CIOs in Germany, Austria and Switzerland (the DACH region) regard digital initiatives as their top business priority, according to a global survey conducted by Gartner, Inc. They cite the drive for operational excellence as their second priority, a deviation from global results, where driving revenue and business growth comes second.
“Our survey results show that DACH CIOs take more ownership in digitizing operations and efficiency, instead of focusing on business growth and revenue,” said Bettina Tratz-Ryan, research vice president at Gartner. “This is due to the fact that CIOs in the DACH region often lack the resources to drive and enable true business transformation. DACH CIOs are involved in enabling the design of new business models — but do not lead these efforts.”
The 2019 Gartner CIO Agenda Survey gathered data from more than 3,000 CIO respondents in 89 countries and all major industries. In the DACH region, 118 CIO respondents were asked for their input.
IT budgets in the region increased on average by 2.7 percent in 2019 — 0.6 percent less than in EMEA. This partly explains the focus of DACH CIOs on core system improvements that modernize and upgrade existing platforms and infrastructure. Those measures decrease costs and improve the efficiency and scalability of digital initiatives.
Overall, resources are a major pain point for DACH CIOs. Fifty-one percent cite insufficient IT-business resources as their most significant barrier to achieving their objectives. Not far behind is a change-blocking business culture at 46 percent. Further, 35 percent of respondents state that insufficient depth and breadth of digital skills are slowing their digital transformation strategies.
DACH CIOs Embrace AI and Chatbots
Thirty-four percent of DACH CIOs regard artificial intelligence (AI) as a game-changing technology for their organizations, followed by data analytics (19%) and cloud (15%). When asked for their top use case for AI, 42 percent cited the use of chatbots as conversational agents.
“Our survey shows that DACH CIOs are actually more likely to invest in chatbots than global CIOs are, and there are already some interesting use cases in the region,” said Ms. Tratz-Ryan. “We often see chatbots in customer management, and some banks have already developed chatbots to handle other tasks such as small money transfers.”
Strengthening Digital Initiatives and Core Systems
Digital business is one of the core investment areas for DACH CIOs. Forty-six percent plan to increase their investment in digital business initiatives. Core system improvements and transformations, such as legacy modernizations, are almost as important (40 percent).
“Digital business initiatives and core system improvements might seem like independent areas, but they belong together. Investing in digital transformation means that organizational silos need to become connected in a digital value chain and, for that to happen, the core systems need to become more agile and interactive. This is especially the case with the advanced operational technologies and the machines and sensors used in industry 4.0 scenarios,” Ms. Tratz-Ryan said. “DACH CIOs have understood that connection, and are investing their resources in the right places. Digitalization will lead to increased transparency of workflows, which will allow CIOs to identify duplications and inefficiencies. That, in turn, will increase operational efficiency.”
As user application touchpoints increase in frequency, change in modalities and expand in device type, the future of app development is multiexperience, according to a recent survey by Gartner, Inc.
“Development platform vendors are expanding their value proposition beyond mobile apps and web development to meet user and industry demands,” said Jason Wong, research vice president at Gartner. “The result is the emergence of multiexperience development platforms, which are used in developing chat, voice, augmented reality (AR) and wearable experiences in support of the digital business.”
Most Common Enterprise Applications
Despite the web browser continuing to serve as the most popular application touchpoint, mobile apps are on the rise. As immersive devices such as smartwatches, smartphones and voice-driven devices permeate the industry, the modes of interaction (type, touch, gestures, natural language) expand across the digital user journey.
Among enterprises that have developed and deployed at least three different types of applications (other than web apps), the most common are mobile apps (91 percent). “These figures are higher than any other application types we asked about, and suggest that the maturity of mobile app development is necessary for expansion into other interaction modalities,” said Mr. Wong.
Conversational applications are the second most widely developed application type, at 73 percent for voice apps and 60 percent for chatbots, according to the survey. “This reflects the natural evolution of application functions to support the digital user journey across natural language-driven modes and devices,” said Mr. Wong.
Technology Behind Multiexperience Development
Cloud-hosted artificial intelligence (AI) services are the most widely used technology to support multiexperience application development (61 percent of respondents), followed by native iOS and Android development (48 percent) and mobile back-end services (45 percent). “This is consistent with the rise of conversational user interfaces, image and voice recognition and other AI services that are becoming commonplace within apps,” said Mr. Wong.
Business Impact Behind Multiexperience Development
Contrary to the perception that mobile apps are in decline, they are in the lead for applications projected to have the most impact on business success by 2020, according to respondents. Following mobile apps are virtual reality (VR) applications and AR applications. “Although respondents indicated a high level of development activity for chatbots and voice apps, very few thought they’d have the most business impact by 2020,” said Mr. Wong.
Barriers to Multiexperience Development
The top barrier to building compelling multiexperience applications is the need for business and IT alignment, according to nearly 40 percent of survey respondents. More than one-quarter of the respondents identified shortcomings in developer skills and user experience expertise as a barrier. “Skills gap in relation to emerging technologies cannot be overstated when discussing inhibitors to scaling digital initiatives, including multiexperience development strategy,” said Mr. Wong.
Worldwide Public Cloud revenue to grow 17.5 percent in 2019
The worldwide public cloud services market is projected to grow 17.5 percent in 2019 to total $214.3 billion, up from $182.4 billion in 2018, according to Gartner, Inc.
The fastest-growing market segment will be cloud system infrastructure services, or infrastructure as a service (IaaS), which is forecast to grow 27.5 percent in 2019 to reach $38.9 billion, up from $30.5 billion in 2018 (see Table 1). The second-highest growth rate of 21.8 percent will be achieved by cloud application infrastructure services, or platform as a service (PaaS).
“Cloud services are definitely shaking up the industry,” said Sid Nag, research vice president at Gartner. “At Gartner, we know of no vendor or service provider today whose business model offerings and revenue growth are not influenced by the increasing adoption of cloud-first strategies in organizations. What we see now is only the beginning, though. Through 2022, Gartner projects the market size and growth of the cloud services industry at nearly three times the growth of overall IT services.”
Table 1. Worldwide Public Cloud Service Revenue Forecast (Billions of U.S. Dollars)
Segment | 2018 | 2019 | 2020 | 2021 | 2022 |
Cloud Business Process Services (BPaaS) | 45.8 | 49.3 | 53.1 | 57.0 | 61.1 |
Cloud Application Infrastructure Services (PaaS) | 15.6 | 19.0 | 23.0 | 27.5 | 31.8 |
Cloud Application Services (SaaS) | 80.0 | 94.8 | 110.5 | 126.7 | 143.7 |
Cloud Management and Security Services | 10.5 | 12.2 | 14.1 | 16.0 | 17.9 |
Cloud System Infrastructure Services (IaaS) | 30.5 | 38.9 | 49.1 | 61.9 | 76.6 |
Total Market | 182.4 | 214.3 | 249.8 | 289.1 | 331.2 |
BPaaS = business process as a service; IaaS = infrastructure as a service; PaaS = platform as a service; SaaS = software as a service
Note: Totals may not add up due to rounding.
Source: Gartner (April 2019)
According to recent Gartner surveys, more than a third of organizations see cloud investments as a top three investing priority, which is impacting market offerings. Gartner expects that by the end of 2019, more than 30 percent of technology providers’ new software investments will shift from cloud-first to cloud-only. This means that license-based software consumption will further plummet, while SaaS and subscription-based cloud consumption models continue their rise.
“Organizations need cloud-related services to get onboarded onto public clouds and to transform their operations as they adopt public cloud services,” said Mr. Nag. Currently almost 19 percent of cloud budgets are spent on cloud-related services, such as cloud consulting, implementation, migration and managed services, and Gartner expects that this rate will increase to 28 percent by 2022.
“As cloud continues to become mainstream within most organizations, technology product managers for cloud related service offerings will need to focus on delivering solutions that combine experience and execution with hyperscale providers’ offerings,” said Mr. Nag. “This complementary approach will drive both transformation and optimization of an organization’s infrastructure and operations.”
Enterprises around the world are making significant investments in the technologies and services that enable the digital transformation (DX) of their business models, products and services, and organizations. In the latest update to its Worldwide Semiannual Digital Transformation Spending Guide, International Data Corporation (IDC) forecasts global DX spending to reach $1.18 trillion in 2019, an increase of 17.9% over 2018.
"Worldwide DX technology investments are expected to total more than $6 trillion over the next four years," said Eileen Smith, program vice president with IDC's Customer Insights & Analysis group. "Strong DX technology investment growth is forecast across all sectors, ranging between 15% and 20%, with the financial sector forecast to be the fastest with a compound annual growth rate (CAGR) of 20.4% between 2017 and 2022."
The two industries that will invest the most in digital transformation in 2019 are discrete manufacturing ($221.6 billion) and process manufacturing ($124.5 billion). For both industries, the top DX spending priority is smart manufacturing, supported by significant investments in autonomic operations, manufacturing operations, and quality. Retail will be the next largest industry in 2019, followed closely by transportation and professional services. Each of these industries will be pursuing a different mix of strategic priorities, from omni-channel commerce for the retail industry to digital supply chain optimization in the transportation industry and facility management – transforming workspace in professional services. A CAGR of 21.4% will enable the professional services industry to move ahead of transportation in terms of overall DX spending in 2020.
The DX use cases – discretely funded efforts that support a program objective – that will see the largest investment across all industries in 2019 will be autonomic operations ($52 billion), robotic manufacturing ($45 billion), freight management ($41 billion), and root cause ($35 billion). Other use cases that will see investments in excess of $20 billion in 2019 include self-healing assets and augmented maintenance, intelligent and predictive grid management for electricity, and quality and compliance. The use cases that will experience the greatest spending growth over the 2018-2022 forecast period are virtualized labs (108.6% CAGR), digital visualization (53.5% CAGR), and augmented design management (43.9% CAGR).
From a technology perspective, hardware and services investments will account for more than 75% of all DX spending in 2019. Services spending will be led by IT services ($154 billion) and connectivity services ($102 billion). Hardware spending will be spread across several categories, including enterprise hardware, personal devices, and IaaS infrastructure. DX-related software spending will total $253 billion in 2019. The fastest growing technology categories will be IaaS (35.9% CAGR), application development and deployment software (26.7% CAGR), and business services (26.5% CAGR).
"Digital transformation is quickly becoming the largest driver of new technology investments and projects among businesses," said Craig Simpson, research manager with IDC's Customer Insights & Analysis group. "It is already clear from our research that the businesses which have invested heavily in DX over the last 2-3 years are already reaping the rewards in terms of faster revenue growth and stronger net profits compared to businesses lagging in DX initiatives and investments."
The United States and China will be the two largest geographic markets for DX spending, delivering more than half the worldwide total in 2019. In the U.S., the leading industries will be discrete manufacturing ($63 billion), professional services ($37 billion) and transportation ($34 billion) with DX spending focused on IT services, applications, and enterprise hardware. In China, the industries spending the most on DX will be discrete manufacturing ($55 billion), process manufacturing ($31 billion), and state/local government ($21 billion). Connectivity services and enterprise hardware will be the largest technology categories in China.
Worldwide Services revenue crossed the $1 trillion mark in 2018
Worldwide revenues for IT Services and Business Services totaled $513 billion in the second half of 2018 (2H18), an increase of 4.5% year over year (in constant currency), according to the International Data Corporation (IDC) Worldwide Semiannual Services Tracker.
For the entire year, worldwide services revenues crossed the $1 trillion mark in 2018. Annual growth accelerated slightly to 4.3%, outstripping the worldwide GDP growth by more than half a percentage point. This largely reflects overall healthy corporate IT spending sustained by large enterprises' cautious yet optimistic business outlook.
Looking at different services markets, project-oriented revenues (i.e. consulting, integration, application development, etc.) continued to outpace outsourcing and support & training. They grew by 6.4% year over year in 2H18 to $194 billion and 5.8% to $380 billion for the entire year. The growth was led largely by business consulting and application development markets. Business consulting grew 9.1% to $63 billion in 2H18 and 8.3% to $123 billion for the year. Custom application development (CAD) grew 8.3% to almost $24 billion in 2H18 and 7.5% to $46 billion for 2018 (compared with only 5.1% in 2017). Market growth was largely due to strong results in the United States. As traditional U.S. enterprises and government agencies continue to tackle and adopt digital transformation, strategic consulting remains critical in larger projects. Digital transformation is also driving up new application development work – not just "new apps" but also upgrading "legacy apps." The accelerated growth in CAD coincides with the strong rebound on the software side.
In managed services, revenues grew 3.8% to $240 billion in 2H18 and 3.6% to $473 billion for 2018, which is on par with real worldwide GDP growth. Application-related managed services revenues (hosted and on-premise application management) outpaced infrastructure and business process outsourcing (BPO), growing by 5.8% to $41 billion in 2H18 and 5.6% to $80 billion for 2018. Like application project work (CAD), application outsourcing serves as a vehicle for buyers to access new app skills (i.e. cloud, analytics, machine learning, etc.), as well as modernizing legacy apps via external providers. IDC expects application-related managed services to continue to outperform other outsourcing segments.
IT Outsourcing (ITO) continued to decline due to flat or negative growth in the mature geographic markets. This was offset somewhat by moderate growth in horizontal business process outsourcing (BPO).
On a geographic basis, the United States, the largest services market, grew by 4.8% to $233 billion in 2H18 and 4.6% to $459 billion for 2018, a moderate acceleration. Strong economic growth in the U.S. despite policy uncertainties, coupled with moderate but steady government spending increases, has kept both corporate and government IT spending robust. Funding for new projects to acquire new capabilities and tools offset continuing downward pressure on commodity services.
Western Europe, the second largest market, grew by almost 3% to $266 billion for 2018, much slower than the U.S. but in line with IDC's previous estimate and more than twice as fast as real GDP growth for the region. This was driven largely by more application-related activities in the region, notably CAD and application outsourcing.
Asia/Pacific (excluding Japan) (APeJ) growth cooled slightly to 6.2% with revenues of $110 billion, partially reflecting economic angst over the impending trade war between the U.S. and China, and the economic slowdown in key mature markets (i.e. Australia/New Zealand, South Korea). Japan enjoyed a slight growth uptick, as business results from the major Japanese services vendors came in slightly higher than expected in 2H18, mainly due to continuing demand for system renewal. Other emerging markets in the region (i.e. India, the Philippines, Indonesia, Vietnam, etc.) continued to show robust growth; however, their impact on growth was limited by their size. Overall growth for the entire APeJ region remained at around 5% for 2018.
In other emerging markets, both Latin America and Central & Eastern Europe (CEE) saw faster growth in 2018 than the previous year. Except for Venezuela and Argentina, and to some degree Colombia, major Latin American markets are in economic recovery, which drove both corporate and government IT spending. All foundation markets showed better growth last year. In CEE, most major geographic markets grew between 6% and 15%, mainly boosted by dynamic economic growth and increased tax revenues. However, in sheer revenue size, CEE is still the smallest geographic market.
Global Regional Services 2H18 Revenue and Year-Over-Year Growth (revenues in $US billions)
Global Region | 2H18 Revenue | 2H18/2H17 Growth |
Americas | $267.6 | 4.9% |
Asia/Pacific | $87.6 | 5.8% |
EMEA | $158.6 | 3.3% |
Total | $513.9 | 4.5% |
Source: IDC Worldwide Semiannual Services Tracker 2H 2018
"Steady growth in the services markets are driven by a continued demand for digital solutions across the regions with the Americas continuing to contribute to the bulk of the revenue growth," said
Lisa Nagamine, research manager, IDC's Worldwide Semiannual Services Tracker. "2018 surpassed the trillion-dollar mark, as we had forecasted at the end of 2017. We expect future growth in many geographies worldwide in coming years."
"More sustained U.S. economic growth, at least compared to other mature economies, allowed large government agencies and traditional businesses to spend more on new projects in recent years," said Xiao-Fei Zhang, program director, Global Services Markets and Trends. "Additionally, digital disruption and global competition have also stoked their digital fear – go digital or go broke."
Worldwide revenues for big data and business analytics (BDA) solutions are forecast to reach $189.1 billion this year, an increase of 12.0% over 2018. A new update to the Worldwide Semiannual Big Data and Analytics Spending Guide from International Data Corporation (IDC) also shows that BDA revenues will maintain this pace of growth throughout the 2018-2022 forecast with a five-year compound annual growth rate (CAGR) of 13.2%. By 2022, IDC expects worldwide BDA revenue will be $274.3 billion.
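For readers who want to sanity-check the arithmetic, the short sketch below applies the standard CAGR formula to the figures quoted above. The 2018 base is only implied (2019's $189.1 billion is stated as 12.0% above 2018), so the output is an approximation of IDC's quoted 13.2%, not a reproduction of its forecast model.

```python
# Back-of-the-envelope check of the BDA figures quoted above. The 2018 base is
# implied rather than stated, so these are approximations, not IDC data.
revenue_2019 = 189.1                    # $bn, forecast for this year
revenue_2018 = revenue_2019 / 1.12      # implied base: 2019 is 12.0% over 2018
revenue_2022 = 274.3                    # $bn, IDC's 2022 expectation

years = 2022 - 2018
cagr = (revenue_2022 / revenue_2018) ** (1 / years) - 1

print(f"Implied 2018 base: ${revenue_2018:.1f}bn")
print(f"Implied 2018-2022 CAGR: {cagr:.1%}")  # comes out close to the quoted 13.2%
```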
"Digital transformation is a key driver of BDA spending with executive-level initiatives resulting in deep assessments of current business practices and demands for better, faster, and more comprehensive access to data and related analytics and insights," said Dan Vesset, group vice president, Analytics and Information Management at IDC. "Enterprises are rearchitecting to meet these demands and investing in modern technology that will enable them to innovate and remain competitive. BDA solutions are at the heart of many of these investments."
IT services will be the largest category of the BDA market in 2019 ($77.5 billion), followed by hardware purchases ($23.7 billion), and business services ($20.7 billion). Together, IT and business services will account for more than half of all BDA revenues throughout the forecast and will be among the categories with the fastest growth. BDA-related software revenues will be $67.2 billion in 2019, with end-user query, reporting, and analysis tools ($13.6 billion) and relational data warehouse management tools ($12.1 billion) being the two largest software categories. The BDA technology categories that will see the fastest revenue growth will be non-relational analytic data stores (34.0% CAGR) and cognitive/AI software platforms (31.4% CAGR).
In terms of deployment, more than 70% of BDA software revenues in 2019 will go toward on-premises solutions. However, revenue for BDA software delivered via the public cloud will experience very strong growth over the five-year forecast (32.3% CAGR) and will represent more than 44% of the total BDA software opportunity in 2022.
"Big Data technologies can be difficult to deploy and manage in a traditional, on premise environment. Add to that the exponential growth of data and the complexity and cost of scaling these solutions, and one can envision the organizational challenges and headaches. However, cloud can help mitigate some of these hurdles. Cloud's promise of agility, scale, and flexibility combined with the incredible insights powered by BDA delivers a one-two punch of business benefits, which are helping to accelerate BDA adoption," said Jessica Goepfert, program vice president, Customer Insights & Analysis at IDC. "When we look at the opportunity trends for BDA in the cloud, the top three industries for adoption are professional services, personal and consumer services, and media. All three industries are rife with disruption and have high levels of digitization potential. Additionally, we often find many smaller, innovative firms in this space; firms that appreciate the access to technologies that may have historically been out of reach to them either due to cost or IT complexity."
The industries currently making the largest investments in big data and business analytics solutions are banking, discrete manufacturing, professional services, process manufacturing, and federal/central government. Combined, these five industries will account for nearly half ($91.4 billion) of worldwide BDA revenues this year. The industries that will deliver the fastest BDA growth are securities and investment services (15.3% CAGR) and retail (15.2% CAGR). Retail's strong growth will enable it to move ahead of federal/central government as the fifth largest industry in 2022.
On a geographic basis, the United States will be the largest country market by a wide margin with nearly $100 billion in BDA revenues this year. Japan and the UK will generate revenues of $9.6 billion and $9.2 billion respectively this year, followed by China ($8.6 billion) and Germany ($7.9 billion). The fastest growth in the BDA market will be in Argentina and Vietnam with five-year CAGRs of 23.1% and 19.4%, respectively. China will have the third fastest growth rate with a 19.2% CAGR, which will enable it to become the second largest country for BDA revenues in 2022.
From a company size perspective, very large businesses (those with more than 1,000 employees) will be responsible for nearly two thirds of all BDA revenues throughout the forecast. Small and medium businesses (SMBs) will also be a significant contributor to BDA revenues with nearly a quarter of the worldwide revenues coming from companies with fewer than 500 employees.
Artificial intelligence (AI) systems spending will reach $5.2 billion in Europe in 2019, a 49% increase over 2018, according to International Data Corporation's (IDC) Worldwide Semiannual Artificial Intelligence Systems Spending Guide. AI solution adoption and spending are both growing at a fast pace in Europe, where companies are moving beyond experimentation to the actual implementation of use cases. In fact, 34% of European companies have already adopted or will have adopted AI by the end of this year across a wide variety of use cases, according to IDC's European Vertical Markets Survey 2018–2019. By 2022, European spending in AI will reach $13.5 billion, reflecting fast-growing interest in AI technologies.
AI is a big topic in Europe — it's here and it's here to stay. AI can be the game changer in highly competitive environments, especially across consumer-facing industries such as retail and finance, where AI has the power to push customer experiences to the next level with virtual assistants, product recommendations, or visual searches. "Many European retailers, such as Sephora, ASOS, and Zara, as well as banks such as NatWest and HSBC, are already experiencing the benefits of AI - including increased store visits, higher revenues, reduced costs, and more pleasant and personalized customer journeys. Industry-specific use cases related to automation of processes are becoming mainstream and the focus is set to shift toward next-generation use of AI for personalization or predictive purposes," said Andrea Minonne, senior research analyst, IDC Customer Insight & Analysis in Europe.
Healthcare is a big bet in the long term in Europe. A fragmented European data regulation landscape, the GDPR, and public sector budgets pose barriers to extensive investments in AI across health organizations. But by 2022, healthcare will be the fastest growing industry for AI investments, indicating that European healthcare organizations have acknowledged the benefits of AI but will take their time over the full implementation journey.
Investments in AI are also supported by governmental and sectoral deals aimed at boosting competitiveness and digital transformation through extensive investment in AI. The European Commission, for example, is increasing investments under the "Horizon 2020" innovation program, injecting €1.5 billion through 2020 to support AI research centers across Europe. Other key government-supported investments include the 2018 AI Sector Deal in the U.K. (worth around £1 billion), which aims to boost the competitiveness of the U.K. in the AI market. Similarly, the French government has pledged to invest €1.5 billion of public funding in AI by 2022 to drive innovation across French companies and compete with bigger AI markets such as the U.S. and China.
By Steve Hone, CEO, The DCA
This month’s DCA Journal theme focuses on one of the four main objectives of the Data Centre Trade Association: “Insight”.
You would think that articulating the valuable role the data centre sector plays in a world increasingly reliant on digital services would be an easy nut to crack, given the myriad of communication options at our disposal these days. But how much of this interaction represents valuable insight, and how much is considered by the recipient to be nothing more than scattergun, distracting noise?
The trade association works hard to get this balance right by providing an environment where innovative ideas can be shared and trusted, and where knowledge can be sourced by those actively seeking advice and guidance. Collaboration with members and partners is the key to converting the technical jargon we love to use into content that stakeholders can understand, relate to and value.
We have found that content marketing activities are most effective at delivering long-term results when they pull an audience in, rather than simply pushing messages out. Research previously conducted by the Economist Group tends to back this up: customers are far more likely to question the legitimacy of content which continually links back to a specific brand or service, and conversely far more receptive to content focused on customer needs rather than the product being sold.
According to that research, 75% of business executives surveyed put a higher value on content focused on helping them generate new ideas to improve and strengthen their businesses, and it was equally interesting to note that virtually the same percentage (71%) said they are actually put off by content which reads more like a sales pitch. Ask yourself how many presentations you have sat through that fit this profile!
Through direct feedback, the DCA has found stakeholders to be far more likely to trust high-quality content when it helps them research a business idea, gain insight into the marketplace, or gain knowledge of an area of their own business. Interestingly, we also found that, irrespective of one’s market share or business standing, maximum impact and credibility are gained when content is peer reviewed and published via an independent source with no conflict of interest.
There is no doubt that B2B content creation can be a powerful and extremely effective tool in gaining the interest and trust of a prospective client. In isolation, however, it should not be seen as the “silver bullet” that instantly gets the sales bell ringing and justifies an ROI threshold; rather, it should form part of a much larger “go to market” strategy.
Sure, you want to sell your product, but forcing it on customers that aren't ready for it won't help. Addressing their needs, however, is a big “YES”, which the DCA totally endorses. When that customer eventually reaches the buying stage, they'll have a connection between the solution they need and the team that helped them define it, and that connection will deliver the ROI you are seeking.
In summary, the truth is that most of the visitors to your site probably aren't ready to buy. They're simply looking for answers, gathering options and trying to understand which solution is right for them. To really help your customers, share your knowledge, not your product portfolio, and the best way to do this is through your own data centre trade association.
To find out more, please give us a call on (0)845 873 4587 or email: info@dca-global.org
By Richard Clifford, Head of Innovation, Keysource
With heightened competition driving the need for new efficiencies to be found across data centre estates, Richard Clifford, head of innovation at critical environment specialist Keysource, discusses some of the key drivers for change in the data centre market.
With increased competition and tighter margins comes new impetus for operators to identify efficiencies. Implementing more efficient cooling systems and streamlining maintenance procedures are well explored routes to doing this, but they also represent low hanging fruit in terms of cost savings. Competition in the co-location and cloud markets is heating up, and so data centre operators are going to have to be more imaginative if they are to stay ahead of the curve.
Some notable trends are likely to accelerate over the next five years and operators would be wise to consider how they can be incorporated into their estates.
The resurgence of the edge-of-network market is one. This relies on a decentralised model of data centres that employ several smaller facilities, often in remote locations, to provide an ‘edge’ service. This reduces latency by bringing content physically closer to end users.
The concept has been around for decades, but it fell out of favour with businesses with the advent of large, singular builds, which began to offer greater cost-efficiencies. That trend is now starting to reverse, due in part to the rise of the Internet of Things and a greater reliance on data across more aspects of modern life. Growing consumer demand for quicker access to content is likely to lead to more operators choosing regional, containerised and micro data centres.
Artificial Intelligence (AI) is also set to have a transformational impact on the industry. As in many sectors, the potential of AI has become a ubiquitous part of the conversation in the data centre industry, but there are few real-world applications of it in place. Currently, complex algorithms are used to lighten the burden on management processes; for example, some operators are using these systems to identify and regulate patterns in power consumption or temperature that could indicate an error or inefficiency within the facility. Managers can then deploy resources to fix it before it becomes a bigger problem or risks downtime. Likewise, they can also be used to identify security risks, for example recording if the data centre has been accessed remotely or out of hours and reporting any unusual behaviour.
This is still in the early stages of development and, at the moment, AI relies on human intervention to make considered decisions rather than automatically deploying solutions. But as the industry learns to embrace this tool, we’re likely to see its capability expand. Specialist research projects such as IBM Watson and Google DeepMind are already focusing on creating new, self-aware AI systems that can be incorporated into a cloud offering and solve problems independently, lessening the management burden even further.
As the implementation of edge networks grows, it is likely that AI will have a greater role in managing facilities remotely. To work successfully, edge data centres must be adaptable, modular and remotely manageable as ‘Lights Out’ facilities, serviced by an equally flexible workforce and thorough management procedures – a perfect example of where AI can pick up the burden. Likewise, storing information in remote units brings increased security risks and businesses will need to consider a vigilant approach to data protection to meet legal obligations and identify threats before they cause damage. Introducing AI algorithms that can remotely monitor security and day-to-day maintenance will go some way to reassuring clients that these risks can be mitigated through innovation.
Innovation must be a carefully considered decision for data centre operators. Implementing an innovative system represents a significant capital investment and it can be difficult to quantify a return. New processes need to be adopted early enough to give a competitive advantage, while caution needs to be exercised to avoid being the first to invest in brand new technology only for it to become obsolete a year later. Striking a balance between these two considerations will be key for data centre operators looking to grow their market share in such a competitive sector - despite the risk, when innovation works successfully the payoff can be huge.
For more information, visit http://www.keysource.co.uk/
Colocation Facility Guidelines & Colo Solution Provider (SP) OCP Ready™ Programs
By Mark Dansie, OCP Data Centre Facility Subject Matter Expert
Open Compute Project (OCP) designs have already been implemented by the operators of hyperscale data centres such as Facebook, Google and Rackspace, and companies such as Fidelity Investments, CERN, the London Internet Exchange (LINX) and Telefonica in Spain have also taken advantage of this new wave of open hardware. These are great examples of how the ‘open’ concept can help investment banking, internet peering, research and telecommunications organisations prosper.
As enterprises move their compute requirements to the cloud, and as telecommunication operators convert their central offices and telephone exchanges to data centres through the work of initiatives such as The Telecom Infra Project (TIP) and the OCP Telco project, the operators of colocation facilities such as Kao Data and Switch Datacentres already understand that, to thrive as a business and to support the new applications being developed, such as IoT, they will need to provide edge, metro and centralised cloud computing. It is envisioned that the majority of these methods of cloud computing will run on OCP designs, which will enable data centre facilities and their tenants to meet the increasing demand for their services at scale and at the lowest possible CAPEX and OPEX.
To assist data centre operators and their tenants across the world with understanding the facility requirements that will be needed to enable smooth and trouble free deployment of Open Racks, an OCP Data Centre Facility sub-project was formed and tasked to produce a colocation facility guidelines checklist document for the deployment of Open Compute Project Racks.
Colocation Facility Guidelines for Deployment of Open Compute Project Racks & Checklist
Interest in the project has grown among both the operators of data centre facilities, who see its value as an aid to transforming their data centres into facilities capable of handling the next generation of cloud computing, and enterprises that want to be sure the colo facilities into which they are looking to deploy their OCP IT gear are OCP Ready™.
The initial project work to create a guidelines and checklist document, which has now been published and is available from the Contributions section of the OCP website or the Data Centre Facility Project WIKI, has focused on defining the data centre sub-system requirements that a European data centre facility would need to provide to accommodate the latest Version 2 design of the Open Rack, which, when populated, could weigh up to 500 kg and have a maximum IT load of 6.6 kW. Although the Open Rack design can be deployed in all regions of the world and can support a much higher IT load (e.g. 36 kW and up to 1400 kg in weight), it was decided that, to create a minimum viable product (MVP) document as quickly as possible, it would be best to restrict the checklist objectives to this less complex case. The project team also considered that if the minimum ‘must-have’ requirement was set at this lower level, it would allow up to 80% of the existing colo facilities in Europe to accommodate an Open Rack and therefore aid in the adoption of OCP.
Classification headings
Within the checklist, the attributes of each data centre sub-system have been assessed and listed under the classification headings of ‘must-have’, ‘nice-to-have’ or ‘considerations’. The parameters of each attribute have then been placed into one of two columns, headed ‘acceptable’ or ‘optimum’. The ‘must-have’/‘acceptable’ attributes are considered by the project team to be the minimum requirement a colo needs to provide to accommodate an Open Rack V2, which weighs a maximum of 500 kg when populated and has a maximum IT load of 6.6 kW.
The ‘nice-to-have’ attributes are viewed as not essential for a deployment, but could be beneficial in a particular scenario. The attributes under the classification heading of ‘considerations’ are those which are usually tenant-specific requirements. The checklist also provides guidance on the parameters considered optimum for each attribute; if implemented by the data centre or tenant, these would enable the full benefits of the Open Rack design to be achieved.
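As an illustration of how the classification scheme fits together, the sketch below models a single checklist attribute in code; the field names and the sample entry are illustrative and are not taken verbatim from the published checklist.

from dataclasses import dataclass

# Illustrative model of one checklist attribute; names and values are examples only.
@dataclass
class ChecklistAttribute:
    sub_system: str        # e.g. "Architectural / Data Centre Access"
    attribute: str         # the facility characteristic being assessed
    classification: str    # "must-have", "nice-to-have" or "considerations"
    acceptable: str        # minimum parameter for an Open Rack V2 (500 kg, 6.6 kW)
    optimum: str           # parameter that unlocks the full benefit of the design

example = ChecklistAttribute(
    sub_system="Architectural / Data Centre Access",
    attribute="Delivery pathway clearance",
    classification="must-have",
    acceptable="2.7 m high x 1.2 m wide, threshold free",
    optimum="Loading dock with integral lift to the goods-in area",
)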
Segments
The checklist has been segmented into the sub-system areas below for consideration by the data centre facility or tenant:
Architectural / Data Centre Access
This section of the checklist considers the requirements needed to allow a fully populated, crated rack to be brought into the data centre from the point of off-loading from the delivery vehicle, and then into the facility via the loading bay or dock to the goods-in area. The many attributes considered and included in the checklist range from a ‘must-have’/‘acceptable’ parameter of delivery at road level, step-free and threshold-free, to a ‘must-have’/‘optimum’ of a loading dock with an integral lift that allows packaged racks on pallets to be transported directly from inside the truck to the data centre goods-in area.
The ‘must-have’/‘acceptable’ parameter for the delivery pathway would be 2.7 m high x 1.2 m wide, as this provides sufficient height and width clearance in the doorways leading to the goods-in and unboxing locations. Ramps are also common in data centre facilities, so it is important that the gradient of any ramp in the delivery pathway is known: a fully populated Open Rack weighing 1500 kg would prove very difficult to move up a ramp steeper than a 1:12 incline.
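A rough back-of-the-envelope figure (illustrative only, ignoring rolling resistance) shows why the gradient matters:

# Approximate force needed to push a fully populated rack up a 1:12 ramp (illustrative only).
mass_kg = 1500            # heaviest populated Open Rack configuration mentioned above
g = 9.81                  # gravitational acceleration, m/s^2
gradient = 1 / 12         # 1 m of rise for every 12 m of run
push_force_n = mass_kg * g * gradient
print(round(push_force_n))   # ~1226 N, roughly 125 kgf, before rolling resistance is even considered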
Other ‘must-have’ attributes that have found their way onto the list, which can be very important to enable a smooth deployment, include specifications for the delivery pathway within the data centre, such as height and width of door openings in corridors and the maximum weight a lift can carry.
Architectural/ White Space
In the checklist, a number of structural attributes for a data centre have been considered, with many classed as ‘must-have’. Open Racks are heavy, and many traditional colos built even as recently as 10 years ago were not designed to accommodate pods of 24 racks with each rack weighing between 500 kg and 1500 kg. A ‘must-have’/‘acceptable’ parameter for the access floor uniform load to support a 500 kg rack would therefore be 732 kg/m2 (150 lb/ft2) (7.17 kN/m2).
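The three figures quoted are the same loading expressed in different unit systems, as a quick conversion (illustrative only) confirms:

# Unit-conversion check of the quoted access floor loading (illustrative only).
lb_per_ft2 = 150
kg_per_m2 = lb_per_ft2 * 0.4536 / 0.0929      # ~732 kg/m2
kn_per_m2 = kg_per_m2 * 9.81 / 1000           # ~7.18 kN/m2, matching the quoted 7.17 within rounding
print(round(kg_per_m2), round(kn_per_m2, 2))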
Electrical Systems
The IT gear within an Open Rack is powered by one or two rack-mounted power shelves, containing AC to 12V DC rectifiers, which distribute 12V or 48V via busbars in the back of the rack to the equipment. The power shelf can also contain lithium-ion batteries that act as the battery backup unit (BBU), with the benefit that a colo does not have to provide a centralised upstream UPS supply.
For a data centre in the EU to accommodate an Open Rack with an IT load of 6.6 kW, a ‘must-have’/‘acceptable’ requirement would be a rack supply, fed by a central upstream UPS, with a capacity of 3-phase 16 A and a receptacle compatible with IEC60309-2 5 wire. The ‘nice-to-have’ attribute, categorised as ‘optimum’ within the checklist because it offers the opportunity to be more energy efficient and resilient, would be for the data centre to provide a supply to the rack that comes not from the central upstream UPS but from the UPS input distribution board. A consideration for the data centre and tenant would be the generator start-up time if the racks rely on the battery backup unit (BBU) of the power shelf as the UPS, so as to ensure there is sufficient autonomy time to keep IT gear functioning before the generator set comes online.
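A simple sketch (assuming a 400 V European three-phase supply, plus a purely hypothetical battery size and generator start-up time, none of which are specified in the checklist) illustrates both the headroom of the ‘must-have’ supply and the autonomy question:

import math

# Headroom of a 3-phase 16 A rack supply (assumes a 230/400 V European supply).
supply_kw = math.sqrt(3) * 400 * 16 / 1000     # ~11.1 kW available at the receptacle
rack_it_load_kw = 6.6
print(round(supply_kw, 1), "kW available for a", rack_it_load_kw, "kW rack")

# Autonomy when the rack-level BBU is the only UPS (both figures below are hypothetical).
bbu_energy_kwh = 0.3                            # assumed usable energy in the power shelf batteries
generator_start_minutes = 1.0                   # assumed generator start-up and transfer time
autonomy_minutes = bbu_energy_kwh / rack_it_load_kw * 60
print(autonomy_minutes > generator_start_minutes)   # True here, i.e. enough ride-through time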
Cooling
One of the many advantages of the Open Rack design is that all servicing and cabling of the equipment in the rack can be carried out at the front, so if the racks are contained in a hot aisle then maintenance personnel need never enter that space, which is normally very uncomfortable to work in. A hot aisle containment system has therefore been classed as a ‘nice-to-have’/‘optimum’ arrangement. The ‘must-have’ attributes in this section of the checklist include either hot aisle or cold aisle containment, front-to-back airflow, and inlet temperature and humidity within the ASHRAE-recommended limits.
Telecommunication Cabling, Infrastructure, Pathways and Spaces
The ‘must-have’/‘acceptable’ arrangement for routing network cabling into an Open Rack would be either top or bottom entry, to the front of the rack. The ‘nice-to-have’/‘optimum’ parameter would be to feed cabling for network connectivity from the top of the rack only, to the front.
Network Infrastructure
In this section of the checklist there are only ‘considerations’ listed, as this aspect of the design is very much specific to the needs of the tenant’s use case. Attributes to be considered by the tenant include the maximum link distance between Spine and Leaf network switches, the transmission speeds of Top of Rack (TOR) switches, and the media type for TOR-to-Leaf and Leaf-to-Spine connectivity.
The OCP Colo Solution Provider (SP) and OCP Ready™ Programs
As a result of these guidelines OCP has launched a new program developed for data centre facility operators who are interested in having their data centre branded OCP Ready™. The Colo Solution Provider (SP) Program is designed to recognise those organisations with data centre facilities which have met the OCP Colo Guidelines. Data centre operators and data centre tenants whose infrastructure is located in a colocation facility can take advantage of the efficiency gains made by deploying OCP technologies. In order to be eligible for these programs a company must be a current OCP corporate member.
Data centre operators can obtain an OCP Ready™ certification mark for their facility by following a few easy steps.
To start the process, the data centre operator reaches out to the Data Centre Facility (DCF) Project lead Brevan Reyher (Rackspace) or the ‘OCP Ready’ EMEA lead Mark Dansie (InflectionTech).
The DCF project lead or EMEA lead requests that the site assessment scorecard is completed by the data centre operator. Once the site assessment scorecard has been completed and is ready for review by the Community, the DCF Project lead will arrange for the data centre operator to attend a monthly DCF project call and present their data centre to the Community.
During the call, the data centre operator presents the results of the checklist and supporting evidence (drawings, commissioning data, etc.). The Community can then ask clarifying questions and once the DCF project members and DCF project lead are satisfied the process moves up to the OCP Incubation Committee (IC) for a final vote.
Upon approval the data centre operator obtains the OCP Ready™ Certification for that facility and is eligible to become an OCP Colo SP and list their facility on the OCP Marketplace.
More information
If you would like to know more here are some useful links.
• Facility Recognition Program
• How to Become an OCP Colo Solution Provider
• Data Centre Facility Project
• Data Centre Facility Project Mailing List
• Data Centre Facility Project Colo Solution Provider Program Wiki
Or ask Mark Dansie who is an OCP data centre facility subject matter expert: mark.dansie@opencompute.org
@markdansie
By Colin Dean, Managing Director, Socomec U.K. Limited
Transformational change is snapping at our heels, every minute of every day. It’s relentless. It transpires that Artificial Intelligence lawyers are better at predicting the outcome of cases than the real thing. Who knew? No sooner have we got our tiny minds around driverless cars than we are asked to consider these robotic forms of transport with their own moral compass – capable of deciding our fate, and that of others, should we be involved in an accident. Whilst the human race might vote for a car that opts to preserve us in number at the expense of the driver, which of us would choose to travel in that particular vehicle? Who could have predicted the day that a petrol-head would be more interested in algorithms than acceleration? It’s functionality that makes satellite navigation look positively vintage.
With advances in technology so interdisciplinary in nature that they challenge our understanding of our place in the universe, and so embedded in our being, the world we know is looking increasingly… unfamiliar.
The fourth industrial revolution?
A more connected approach in this interdisciplinary world is fast becoming a vital component in the growth of Industry 4.0. The integration and consolidation of existing solutions – along with the introduction of new digital technologies – is creating a more flexible and joined-up industrial model.
The drive towards common digital architecture is maximising the potential value of the IoT; the result is unsurpassed supply chain efficiency and the benefits of more automated, centralised systems.
Furthermore, advanced analytics that are integrated with the wider digital infrastructure can drive innovation, efficiencies and improvements in quality – in timescales more compressed than ever before.
The convergence of the new and old worlds – our digital businesses with our bricks and mortar infrastructures – puts pressure on every industrial sector to rethink business processes in order to make the most of new opportunities whilst also mitigating risk.
This dramatic change in the way that we capture and use data has propelled those responsible for managing the buildings and facilities that house big data into the corporate spotlight.
Are we ready to evolve?
Whilst we may understand the principles of the digital revolution, and have a fair grasp of the associated opportunities and risks, applying the necessary changes to our own environment – our own organisation – is typically approached with caution. In order to remain competitive, however, it’s becoming increasingly important to prepare for change – in order to thrive no matter what the future may hold.
Developing a more rounded awareness of advances in technology – and how they might be applied in the context of a specific organisation and its unique environment – can help harness the positive aspects of change whilst minimising risk during this profound period of convergence between man and machine, and of big data in an ever-shrinking world.
A new electrical ecosystem
In terms of electrical infrastructure, the guaranteed functionality of the new breed of ecosystem is paramount. Power continuity, reliability and optimised efficiency are key drivers of competitiveness – in an increasingly competitive environment.
The digital revolution is driving an increase in computational power – with supercomputers in our pockets, connectivity has become the norm. Offline is not an option; for any organisation, the consequences of downtime are far reaching and can impact public safety, as well as business continuity. Today’s high performance power supplies need to achieve maximum operational uptime – as well as delivering a fast return on investment.
To remain relevant, components within an electrical infrastructure need to deliver beyond our expectations, with unprecedented performance and the ability to seamlessly integrate into an existing architecture whilst being robust and flexible enough to cope with an unknown future state.
When worlds converge
Next generation UPS solutions are emerging that bring together digital solutions with the world of energy to minimise consumption and emissions, optimise equipment lifespan and ensure total reliability. The most advanced systems will help to drive innovation, introduce greater efficiencies and improve performance levels.
Digital native – ready for Industry 4.0
The latest development from Socomec is the result of this new approach and is based on proven Masterys technology – a UPS solution that has been efficiently protecting the supply of critical applications around the world since its inception in 2004 as the first 3 level topology system. With more than 90,000 units deployed in the field, it has won the trust, approval and certification from the most demanding users.
The fourth generation Masterys UPS combines Socomec’s proven technology with new and unsurpassed performance in terms of reliability and service level – and is equipped for today’s “smart factories”. Colin Dean explains: “A true digital native, the fourth generation Masterys has been borne out of the digital revolution – and is ready for the requisites of Industry 4.0.”
Making next generation performance accessible to all
The data we consume so voraciously, however, comes at a cost. With energy prices rising continuously, and floor space at a premium, power density and the optimisation of infrastructure are under pressure – especially as the cost of powering a data centre, for example, can outstrip the cost of the computing horsepower that drives the facility.
Colin Dean continues: “Managing costs and resources - whether developing or adopting the latest smart solutions - requires a careful balancing act. Every electrical infrastructure has its own specific set of requirements – which is why we have engineered our UPS protection to be customized accordingly, whether delivering ultra-high performance solutions, or more general purpose solutions that deliver value for money without compromising on performance. Both ranges within the latest Masterys development have been designed to be easily configured – even during order processing – and they can also be adapted to the needs of existing installations.”
Augmented Reality: UPS recognition and data acquisition
By integrating smart technology within an electrical infrastructure, it is possible to develop an unparalleled understanding of sites, buildings and processes. This new connectivity - combined with a universal view of operating parameters - enables a reduction in energy consumption, costs and emissions and makes the deployment of resources more efficient.
Colin Dean explains: “The installation and commissioning of any UPS is fundamental to ensuring functionality and optimized performance. By considering the product from the perspective of our customers as well as end users – and harnessing the power of the latest digital and augmented reality technology - we have created a disruptive approach to the way that UPS are installed.”
A smart new approach
E-WIRE – the first app in the world specifically designed to support UPS installations - simplifies the installer’s job, improves power supply reliability and ensures operator safety. Using augmented reality technology, E-WIRE recognizes the UPS to be installed via the installer’s smartphone camera. The app will then automatically download all relevant information pertaining to that UPS in order to fully support the installation.
Providing step-by-step instructions, installation is fast and foolproof – from positioning the UPS, to verifying electrical protection and even providing a guide to cabling both the UPS and the battery system.
When the installation is complete, E-WIRE asks the installer to perform a series of checks and balances, including electrical measurements. A report is then sent to the Socomec Services Center to validate the operation and authorize the commissioning.
Fit for the future
By combining the latest technology with specialist services and training from professional partners, it is now possible to design, install, commission, monitor and maintain an intelligent clean infrastructure that is compatible with the next generation of smart facilities.
The power demands placed upon hard-working electrical infrastructures are, however, evolving rapidly. The resilience of a facility – the ability to remain operational even when there has been a power outage, hardware failure or other unforeseen disruption – is becoming increasingly critical for every Facility Manager.
Unexpected events can be hugely detrimental to the new smart factory and its IoT devices – resulting in significant direct costs and crippling consequential losses. When a typical server system can experience more than 125 events each month, prevention is most certainly preferable to cure.
Reassuring capability
With continuous UPS remote monitoring technology, it is possible to anticipate problems and initiate interventions before they take effect. Via expert web monitoring of Socomec’s UPS performance, predictive, preventive and corrective maintenance services can be deployed – and anomalies can be detected and possible malfunctions averted.
Colin Dean comments: “We are all adapting to change every single day – but at Socomec, we invest and work hard to help our customers manage the effects of that change more easily and efficiently, by innovating for tomorrow’s world. We are taking the most relevant and powerful aspects of new technology to design and develop products and services that not only deliver the best possible performance today, but have real agility engineered-in.”
By Matteo Mezzanotte, Communications & PR, Submer
Smart-Datacentres Wanted
During an inspiring TED conference in December 2017, Aaron Hesse, Datacentre Design Engineer at AWS, explained how the energy sector of our global economy was changing faster than almost any other sector and how smart buildings were becoming a reality, not just fanciful ideas from a Ray Bradbury novel.
Two years later, we still don’t know exactly how many more smart buildings have been built or are being built. In the datacentre industry, we can quite confidently say that we are some way off from seeing smart datacentres popping up around the world. Nevertheless, things are slowly changing. Some encouraging initiatives are taking place, indicating that creating environmentally sustainable datacentres is not just some utopia or the dream of a few. Datacentres can actually change from a feared source of pollution into energy contributors to surrounding communities. An example of this is EcoDatacentre, as presented by Jonathan Evans.
EcoDatacentre (Source: https://newatlas.com/first-climate-positive-data-center/36312/)
Big players (such as Google, Facebook, Apple, etc.) are actively looking for new ways to make the datacentre industry a sustainable one, pushing the envelope of green innovation. It must be said, though, that the datacentre ecosystem comprises many smaller providers who clearly struggle to match that commitment to renewable energy and eco-friendly procedures, and not only because of a lack of available resources.
The not so Green Datacentre
The global datacentre market is expected to reach revenues of about $174 billion by 2023, according to analysts’ forecasts. The fast-paced growth of Deep Learning, Machine Learning, IoT, Smart City, AI and blockchain (to name just a few of the trends powering the ongoing digital transformation) is responsible for the rapid expansion of datacentres and HPC. These trends require the processing of large quantities of data, which translates into a need for greater computational capacity and, in turn, greater energy consumption – which is what makes datacentres a not-so-green industry, to say the least.
Just as the Industrial Revolution brought economic growth while imposing a heavy toll on the environment, the Digital Revolution we are living through today has radically improved our lives, but with dramatic consequences for the environment. Even though datacentres do not spew out black smoke or grind greasy cogs, the social and environmental impact of the datacentre industry tends to go unnoticed or underestimated.
Datacentres and cloud providers consume 6% of the world’s electricity (more than India) and generate 4% of global CO2 emissions (more than twice that of commercial air travel). By 2025, the industry is estimated to consume 20% of the world’s electricity.
With these figures in mind, datacentres need to rethink their strategy in order to become smarter, more efficient and sustainable – sustainability being the fourth most pressing concern for today’s datacentres, alongside energy efficiency, operating costs and security.
ICT electricity demand (Source: https://www.nature.com/articles/d41586-018-06610-y)
The Need to Change
How has the datacentre industry reacted to this problem?
So far, we’ve seen different attempts to minimise the impact on the environment by limiting electricity consumption or by finding ways to use natural resources as a cooling system. Last year, Microsoft launched Project Natick, lowering an eco-friendly datacentre into the sea off the Orkney Islands.
Project Natick (Source: https://news.microsoft.com/features/under-the-sea-microsoft-tests-a-datacentre-thats-quick-to-deploy-could-provide-internet-connectivity-for-years/)
In recent years, many companies have started to look at cold-climate regions as an ideal setting for building their datacentres. The Nordics are likely to gain market share thanks to some key advantages such as: abundant renewable energy, carbon neutrality, reliable power supply, low energy prices, political stability and faster time-to-market primarily due to ease of doing business.
However, moving datacentres to the Nordics might not be an option for everyone. For example, there are companies that need to have their data close to their business and customers, and there are those that cannot consider renewable energy as a first choice due to the nature of their business. Latency problems might arise when a datacentre is far from where it is needed. Finally, there are also those who are concerned about the environmental impact and energy consumption that a potentially massive migration of datacentres would bring to those relatively uncontaminated (or almost pristine) areas.
A Future with Power-Efficient Datacentres
One of the biggest mistakes when pursuing ways to create a sustainable datacentre is thinking only about the “outside”: the location, the kind of resources it uses, the climate, the temperature, etc. It is time to start considering that energy efficiency and, consequently, sustainability is a challenge that must be tackled from the outside and from the inside, by rethinking the design of every single element of a datacentre: from the building itself down to the smallest component of a server, as explained by Rabih Bashroush.
In a recent webinar organised by Submer Immersion Cooling, John Laban, from Open Compute Project, explained this concept, focusing on how to achieve energy efficiency in a datacentre by identifying all points of electricity waste in the delivery of power to datacentres and HPC.
Webinar Slidedeck (Source: https://submer.com/webinar-the-future-of-power-efficient-datacentres)
Another aspect that dramatically affects the efficiency and sustainability of a datacentre is cooling. The cooling process accounts for 40 percent of all power consumed by datacentres, so finding the correct cooling strategy is a top priority for operators.
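To put that figure in context, a simple illustrative calculation (assuming, for simplicity, that cooling is the only significant non-IT overhead) shows what it implies for a facility's Power Usage Effectiveness (PUE):

# Illustrative PUE implied by 'cooling accounts for 40% of all power'
# (simplifying assumption: everything that is not cooling is IT load).
cooling_share = 0.40
it_share = 1 - cooling_share
pue = 1 / it_share        # PUE = total facility power / IT power
print(round(pue, 2))      # ~1.67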
There are a number of different cooling methods available to datacentres.
Submer’s Take
In this scenario, Submer does its part. How?
Submer Immersion Cooling is changing how datacentres are built from the ground up, to be as efficient as possible and to have little or even a positive impact on the environment around them.
The Immersion Cooling solutions designed by Submer have been conceived to make datacentres, HPC and hyperscalers smarter, helping them to drastically limit their use of energy, their footprint and their consumption of precious resources such as water.
Not only that: starting in February, we launched a new series of webinars and blog articles around HPC and datacentres to share knowledge and raise awareness of the role of datacentres and HPC in our society. A sort of speakers’ corner, where our guests can talk freely about their experience and share their points of view on different topics: from datacentre design to Immersion Cooling integration, and from the environmental and social impact of datacentres to best practices for limiting electricity waste.
The SmartPods Showroom at Submer’s offices
Today's LTE and 4G networks have been playing an important role in supporting mobile broadband services (e.g., video conferencing, high-definition content streaming, etc.) across millions of smart devices, such as smartphones, laptops, tablets and Internet of Things (IoT) devices. The number of connected devices is on the rise, growing 15 percent or more year-over-year and projected to be 28.5 billion devices by 2022 according to Cisco VNI forecast.
By Adrian Taylor, Regional Vice President of Sales, A10 Networks.
Mobile service providers have been challenged to support such high growth in connected devices and the corresponding increases in network traffic. Adding networking nodes to scale out capacity is a relatively easy change. Meanwhile, it's essential for service providers to keep offering innovative value-added services to differentiate the service experience and monetise new services. These services include parental control, URL filtering, content protection and endpoint device protection against malware and ID theft, to name a few.
Service providers, however, are now facing new challenges of operational complexity and extra network latency arising from those services. Such challenges will become even more significant with 5G, as it will drive even more rapid proliferation of mobile and IoT devices. It will be critical to minimise latency to ensure there are no interruptions to the emerging mission-critical services that are expected to increase dramatically with 5G networks.
Gi-LAN Network Overview
In a mobile network, there are two segments between the radio network and the Internet: the evolved packet core (EPC) and the Gi/SGi-LAN. The EPC is a packet-based mobile core running both voice and data on 4G/ LTE networks. The Gi-LAN is the network where service providers typically provide various homegrown and value-added services using unique capabilities through a combination of IP-based service functions, such as firewall, carrier-grade NAT (CGNAT), deep packet inspection (DPI), policy control, traffic and content optimisation. And these services are generally provided by a wide variety of vendors. Service providers need to steer the traffic and direct it to specific service functions, which may be chained, only when necessary, in order to meet specific policy enforcement and service-level agreements for each subscriber.
The Gi-LAN network is an essential segment that enables enhanced security and value-added service offerings to differentiate and monetise services. Therefore, it's crucial to have an efficient Gi-LAN architecture to deliver a high-quality service experience.
Challenges in Gi-LAN Segment
In today's 4G/LTE world, a typical mobile service provider has an ADC, a DPI, a CGNAT and a firewall device as part of its Gi-LAN service components. These are mainly deployed as independent network functions on dedicated physical devices from a wide range of vendors, which makes the Gi-LAN complex and inflexible from an operational and management perspective. This type of architecture, also known as a monolithic architecture, is reaching its limits and does not scale to meet the needs of rising data traffic in 4G and 4G+ architectures, and it will continue to be an issue in 5G infrastructure deployments. The two most serious issues are:
Latency is becoming a significant concern, since lower latency is required by online gaming and video streaming services even today. With the transition to 5G, ultra-reliable low-latency connectivity targets latencies of less than 1ms for use cases such as real-time interactive AR/VR, the tactile Internet, industrial automation, mission- and life-critical services like remote surgery, self-driving cars and many more. An architecture with individual service functions on different hardware has a major impact on this promise of lower latency: multiple service functions are usually chained, and every hop the data packet traverses between service functions adds latency, causing overall service degradation (see the illustrative sketch after the next paragraph).
Managing each solution independently is also a burden. The network operator must invest in monitoring, management and deployment services for all devices from various vendors individually, resulting in large operational expenses.
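The toy calculation below (with made-up per-hop figures, purely to illustrate the latency point raised in the first issue above) shows how quickly chained hops can consume a sub-millisecond budget:

# Illustrative only: invented per-hop latencies for a chained Gi-LAN service path.
hops_ms = {
    "firewall": 0.30,
    "cgnat": 0.25,
    "dpi": 0.35,
    "url_filtering": 0.30,
}
total_ms = sum(hops_ms.values())
budget_ms = 1.0            # the sub-1 ms target cited for ultra-reliable low-latency 5G use cases
print(round(total_ms, 2), "ms of chaining overhead against a", budget_ms, "ms budget")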
Solution – Consolidating Service Functions in Gi-LAN
In order to overcome these issues, there are a few approaches you can take. From an architecture perspective, a Service-Based Architecture (SBA) or microservices architecture will address the operational concerns, since such an architecture leads to greater flexibility, more automation and significant cost reduction. However, it is less likely to address the network latency concern, because each service function, whether a VNF or a microservice, still contributes to the overall latency as long as it is deployed as an individual VM or microservice.
So, what if multiple service functions are consolidated into one instance? For example, CGNAT and the Gi firewall are fundamental components in the mobile network, and some subscribers may choose to use additional services such as DPI or URL filtering. Such consolidation is feasible only if the product or solution supports flexible traffic steering and service chaining capabilities along with those service functions. Consolidating Gi-LAN service functions into one instance or appliance drastically reduces the extra latency and simplifies network design and operation. Such concepts are not new, but there aren't many vendors who can provide consolidated Gi-LAN service functions at scale.
Therefore, when building an efficient Gi-LAN network, service providers need to consider a solution that can offer consolidated service functions together with flexible traffic steering and service chaining at scale.
According to Forrester, 84 per cent of online adults in the UK, France, Germany, Italy and Spain use smartphones. Retailers should take note of this statistic, because it’s likely that these always-online consumers will increasingly utilise devices throughout their shopping journeys to research, browse and buy.
By Richard Willis, RVP Consulting, Aptos.
However, this isn’t the first time we have heard that mobile or m-commerce is on the way up. Over the past few years, it’s been predicted that ‘this is the year for mobile’ – but it has never really come true. The good news is that Forrester states that over the next five years, Western European online retail sales will grow at over three times the rate of total retail sales, driven in part by the boom of mobile commerce.
At the risk of joining in with the crystal ball gazing, 2019 may mark a watershed in mobile retail – but only if retailers can seize the opportunity that is now on offer.
The mobile opportunity
No one claims that mobile will surpass other retail channels in terms of conversions in the foreseeable future. In-store, where consumers can examine items and talk to knowledgeable sales assistants, still provides a unique experience and should never be compromised; meanwhile, traditional online retail presents the shopper with enormous choice on an easily viewed browser.
But mobile does have a key role to play in shoppers’ experience. Whilst our recent research showed 11 per cent of UK shoppers planned to use mobile as their preferred channel in the run-up to Christmas 2018, it also revealed that of those using mobile, almost 40 per cent were using it to look for inspiration for gifts rather than make the actual purchase.
We also found that just under a third of shoppers planned to use mobiles to check online prices while in-store (the old “showrooming” phenomenon). This insight is supported by figures from Deloitte’s annual UK mobile consumer survey, which reveals the rising influence of smartphones on retail sales – including how 84 percent of millennials claim to use their phones for shopping assistance while in a store.
How to keep shoppers coming back
It’s clear that mobile is a large and increasingly important part of the customer experience journey. The challenge for retailers – and their great opportunity – is ensuring that the mobile experience is easy to navigate and consistently fantastic, whether shoppers are making purchases, looking for gift inspiration or comparing prices.
Retailers might think that the best way to turn browsing into sales is by offering something that others don’t – and to some degree they’re right. But getting the basics correct counts for much more than a gimmick.
According to Forrester, smartphone-savvy consumers have high expectations for mobile experiences, with 61 per cent of shoppers more likely to return to a website if it is mobile-friendly.
What steps can retailers take to ensure their mobile sites keep shoppers coming back?
For starters, m-commerce sites should be optimised for every device and mobile OS. Differences in screen size and resolution, button placement, or operating system can have a huge effect on the mobile experience. Retailers often claim that they optimise their websites for every device, but do they take into account the small factors which can have big consequences on the path to purchase?
One example is placing the checkout or “Buy Now” button in the space where push notifications usually appear. This could lead to the user becoming distracted or accidentally clicking out of the purchase – perhaps a small problem but one which, multiplied by thousands of users, could severely affect sales.
Another key consideration is designing websites to be mobile-first. Many websites carry a large amount of content that is right for bigger screens, such as long blogs, videos or interactive content. Mobile-first sites, on the other hand, need to be crisp, clear, uncluttered and easy to navigate, with visuals specifically designed for mobile devices.
Finally, we would urge retailers to think about devices holistically. M-commerce is about much more than buying something through your device's browser. An effective strategy should embrace loyalty apps with a range of functions that optimise navigability, provide a variety of services and boost loyalty. This could include self-service options such as checking availability and setting up click-and-collect delivery options, or providing product reviews, social integration and single-click ordering.
An m-commerce approach is much more than simply venturing into another sales channel. It’s opening the door to a new generation. By optimising their mobile offering now, retailers have a unique opportunity to connect with always-online consumers, who were practically born with a smartphone in their hand. Forrester’s claims are bold, but this could be the year that m-commerce finally takes off.
Artificial Intelligence (AI) and 5G are red hot topics today. However, despite all the hype and discussion around connected and intelligent applications, there is still a lot to be said about how these two technologies will leverage each other to deliver the networks and associated use cases of the future.
By Brian Lavallée, Submarine Networking Solutions Expert, at Ciena.
When compared to existing 4G LTE networks, 5G will offer unprecedented speeds, much lower latency, higher connectivity, and higher availability to power smart cities, connected and autonomous vehicles, AR/VR streaming, and numerous other advanced applications.
AI, fuelled by analytics, has the potential to help improve the efficiency of network communications and maintenance, while safeguarding network uptime through automated, policy-based decision-making driven by streaming real-time network data. It also has the potential to identify and prevent potential service disruptions, detect suspicious network behaviour for increased security, and proactively improve overall network reliability. This combination represents significant value for operators preparing to adopt emerging technologies, such as 5G, as they mature.
AI and 5G have the potential to help solve the challenges that impede network evolution. Together they could reduce costs related to maintenance and network downtime and improve decision-making on bandwidth allocation and network repairs to ensure high-availability and performance for users across the globe.
AI: Challenges vs. prospects
For network operators, understanding the value of AI must start with an understanding of AI itself and what it can offer in terms of network management, efficiency, profitability, and security.
AI is a “thinking” machine capable of monitoring network behaviours, identifying potential problem areas, detecting what is “not normal”, and taking corrective action. Harnessing the power of data analytics, AI can introduce greater efficiency to the workflow by removing the requirement for human operators to make basic decisions on migration and path selection. This will optimise network functionality and enable bandwidth on demand.
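As a flavour of what "detecting what is not normal" can mean in practice, here is a minimal, generic anomaly-detection sketch based on a rolling z-score over streamed utilisation samples; it is purely illustrative and does not describe any particular vendor's implementation:

from collections import deque
from statistics import mean, stdev

# Minimal, generic anomaly detector: flag samples far from the recent rolling average.
def detect_anomalies(samples, window=20, threshold=3.0):
    history = deque(maxlen=window)
    anomalies = []
    for t, value in enumerate(samples):
        if len(history) == window:
            sigma = stdev(history)
            if sigma > 0 and abs(value - mean(history)) / sigma > threshold:
                anomalies.append((t, value))   # candidate for an alert or corrective action
        history.append(value)
    return anomalies

# Example: steady link utilisation (around 40-42%) with one sudden spike to 95%.
utilisation = [40 + (i % 3) for i in range(60)] + [95, 41, 40, 42]
print(detect_anomalies(utilisation))   # flags the 95% sample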
Despite the apparent benefits of ‘automated networks’, it’s clear that operators will not surrender complete control of their networks – instead opting for adaptive networking practices that harness the power and efficiency of data-driven AI and combine it with the invaluable experience of their engineers.
AI should be implemented in controlled stages, with rigorous testing to ensure it has both enough data and the right data to form solid decision-making policies. By developing projects individually, operators can ensure they are satisfied with the results in one area, before scaling up deployments to cover more network functions, thereby reducing the potential for errors ahead of broader adoption.
AI’s role in 5G
It is predicted that there will be one billion users of 5G by 2020 with one in seven mobile connections made via 5G by 2025. And, while 5G is still in the testing phase, the initial rollouts seen in South Korea, the US, and the UK, will be used as proof points for how the technology will enable operators to determine the most cost-effective models upon which to expand to national and regional coverage.
AI won’t just be beneficial to 5G, it will be a necessity. By drawing on the massive influx of data to form superior policies, AI can be used to examine network activity and suggest the appropriate action in the event of service disruptions. In addition, AI will enable self-healing capabilities. Through real-time data analysis, AI will compress decision-making timelines by orders of magnitude, repairing or even reconstructing the network in a matter of minutes to minimise disruptions from damaged cables or attempted network intrusions. The potential savings through the prevention of revenue loss will be a crucial factor in ensuring cost-effective 5G services as operators evolve their networks over the next decade.
It is also likely that the increase in device traffic on 5G networks will usher in a greater threat to operators from Distributed Denial-of-Service (DDoS) attacks, as hackers will be able to launch attacks of unprecedented size. AI will offer increased security through proactive network monitoring, using historical data to spot anomalies on network services and signs of intruder connections, and taking action to protect the network and preserve functionality.
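To make the idea of anomaly-based detection concrete, here is a minimal sketch in Python, assuming traffic is sampled as requests per second and compared against a simple rolling baseline; the window size and three-sigma threshold are illustrative choices, not figures from this article.

```python
# Minimal sketch: flag traffic anomalies against a rolling historical baseline.
# The window size and 3-sigma threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class TrafficAnomalyDetector:
    def __init__(self, window=60, sigma=3.0):
        self.history = deque(maxlen=window)  # recent per-interval request counts
        self.sigma = sigma

    def observe(self, requests_per_second: float) -> bool:
        """Return True if the new sample deviates sharply from the baseline."""
        anomalous = False
        if len(self.history) >= 10:
            baseline = mean(self.history)
            spread = stdev(self.history) or 1.0
            anomalous = requests_per_second > baseline + self.sigma * spread
        self.history.append(requests_per_second)
        return anomalous

detector = TrafficAnomalyDetector()
for sample in [1200, 1180, 1250, 1190, 1210, 1230, 1220, 1205, 1240, 1195, 9800]:
    if detector.observe(sample):
        print(f"Possible DDoS: {sample} req/s is far above the recent baseline")
```

In a production setting this logic would sit behind the operator's monitoring pipeline and feed automated mitigation, but the principle – learn the baseline from historical data, then flag sharp deviations – is the same.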
Therefore, operators should start integrating AI capabilities into their existing 4G network infrastructure now, where appropriate, to provide the additional functionality that will be a must for managing independent 5G networks in the future.
AI at 5G speed
There is no doubt AI and 5G will soon take on a significant role in mission critical telecommunications services. As operators define the most effective models and connect individual deployments to create broader networks, AI-enabled 5G will evolve to uncover exciting new possibilities and use cases. Together these technologies will drive greater business profitability, innovation and enhanced user experiences.
For operators, the time to lay the groundwork for 5G connectivity is now. A big part of that journey will involve developing AI capabilities that go beyond today’s network functionality and embrace the world of high-bandwidth, always-on connectivity, to give rise to intelligent infrastructure and widespread connected device networks across the globe.
While MPLS still dominates the WAN market, no organisation can afford to ignore the speed with which SD-WAN (Software-defined WAN) is gaining traction or the scale of innovation globally. With Gartner currently tracking 60 SD-WAN vendors – a six-fold increase between 2017 and 2018 – WAN decision-making is fast evolving from ‘MPLS versus SD-WAN’ to ‘Which SD-WAN?’.
With the market on the cusp of widespread SD-WAN adoption, organisations will need to determine how and why they will deploy SD-WAN. One of the challenges facing service providers, multi-site businesses and IT departments is the ongoing role of MPLS technology, recognising that legacy WAN contracts may still be in place. A hybrid WAN model incorporating both MPLS and SD-WAN gives organisations the opportunity to harness the best attributes of both technologies and begin a phased migration from MPLS to SD-WAN now, as Nick Sacke, Head of IoT and Products, Comms365, explains.
Why SD-WAN? Improving the User Experience on diverse connectivity
In a cloud dominated user environment, the quality and reliability of the WAN to deliver application performance has become an essential component of IT infrastructure design. The current consensus is that traditional MPLS networks struggle with the volume of Internet-based cloud traffic, the diversity of routing locations (applications are delivered from multiple clouds, not a single datacentre), and ensuring application performance across both MPLS and Internet-bearing services. The impact of this is an increase in the number of customers evaluating and requesting SD-WAN solutions from their service providers. Indeed, it would be hard to find an organisation today taking the decision to go for traditional MPLS without considering the SD-WAN alternative. Given the increasing commitment to improving user experience and enhancing the management of application performance, it is the ease with which the benefits of the SD-WAN technology can be utilised – from agility and rapid change to multi-linked failover and application prioritisation – that should be an essential consideration.
And as such, the way in which organisations decide to deploy SD-WAN will be key. Right now in the UK it is the Managed SD-WAN service model that dominates the market, as experienced MSPs can rapidly deploy the solution with demonstrable high performance and multiple built-in capabilities from day one, meaning that organisations can reap the benefits almost immediately. Replicating the familiar outsourced services used by many organisations to achieve WAN connectivity, MSPs are rapidly adding SD-WAN technologies to existing managed services portfolios. The services include every aspect of the SD-WAN solution, from hardware to software, networking and connectivity, all delivered within the standard Service Level Agreement (SLA) model.
In contrast to the Managed Service where every aspect of the service and all changes to the parameters of that service are undertaken by the MSP, the alternative deployment model is SD-WAN as a Service. This approach, which has yet to become widely available in the UK, is gaining significant interest in North America. This software only model provides a multi-tenanted infrastructure set up that enables companies to rapidly connect sites while also providing the IT Manager with the tools to monitor, manage and change service parameters as required.
A Hybrid Approach to SD-WAN
SD-WAN has the potential to be a replacement for MPLS, but this is not necessarily the right option for every organisation. For a UK multi-site operation, many of the sites could still be in contract, with months or years remaining – so a wholesale replacement with SD-WAN would be commercially challenging. An alternative is to combine MPLS and SD-WAN in a hybrid approach that augments capacity, enables rapid expansion, gives IT managers greater control and avoids extending the overall MPLS contract term just to add a few sites – all without a huge financial outlay. By adopting such an approach, the case for complete replacement of MPLS can then be considered longer term without penalty.
For an IT Manager used to outsourcing WAN connectivity, the evolution from MPLS to Managed SD-WAN should be culturally straightforward: the model is the same, the difference is simply the underpinning technology and the benefits associated with the software defined model. Differentiating between MSPs will be based on issues such as access to a diversity of connectivity options and quality of service – for example, does the MSP support the need for agility and flexibility, as well as future proofing, by offering a network agnostic SD-WAN?
SLAs will be key and, over time, the managed SD-WAN services on offer will undoubtedly become ever more sophisticated as MSPs look to exploit the intelligence within the SD-WAN technology. Certainly traditional response times are no longer good enough in application centric organisations, so it is important to determine whether or not an MSP is leveraging the software only nature of SD-WANs to overhaul its own support operation. With a software defined solution, it is possible to scale support three, four, even five-fold, which should enable far faster response to business demands.
SLAs are already evolving from a response timeline to a respond and fix time. With the addition of analytics and artificial intelligence (AI), Managed SD-WANs should include increasing levels of automation, such as the use of application aware routing to enhance the performance within increasingly application centric organisations.
Maturity & Confidence
Many of these benefits can, of course, be achieved immediately in-house with the as a Service model, if organisations have the resources and confidence to manage the SD-WAN network. This option provides the chance to maximise the value of the SD-WAN technology, such as immediately reallocating application resources as required, with no need to wait for the MSP to respond.
However, given the current level of market maturity, few organisations in the UK have yet achieved the required level of confidence or technical skills. The model is, however, likely to gain interest as SD-WAN maturity and confidence improve. Indeed, as organisations become ever more application centric given the huge increase in cloud based applications, IT departments are looking to dedicate ever more resource to monitoring the application performance which is so critical to business operations.
As such, these skills will be increasingly embedded within IT teams which means, looking ahead, the SD-WAN as a Service model is likely to become increasingly popular, dovetailing neatly into the next generation of application aware monitoring and management tools that will be key to improving end user experience.
Cost Consideration
In terms of service model, the cost differential between a Managed MPLS and a Managed SD-WAN service is negligible – although a number of the SD-WAN technologies being developed are significantly more expensive than MPLS alternatives. These are incredibly feature-rich solutions, and companies will need to take a robust approach to assessment to determine whether any of the expensive add-ons are really required.
The SD-WAN as a Service model is significantly cheaper – but it will require additional internal resource, so the operational cost comparison will depend upon the existing IT skill base and need to add heads to manage the network. Finding and recruiting the required skills internally could result in a higher cost than the managed service model.
With either approach, SD-WAN delivers cost and performance benefits. Certainly for those used to the Managed Service approach, the better speed of response and automation enabled by SD-WAN technology should enable IT Managers to reallocate internal resource previously dedicated to managing application problems. Where the slow response delivered by MPLS MSP services could take hours to determine whether application performance was caused by the WAN, LAN or device, the intelligence delivered by an SD-WAN makes such diagnosis – and repair – far quicker to achieve, reducing the resources required. SD-WAN technology also makes it far easier to manage performance: by combining traffic information with analytics, organisations can spot trends and determine pinch points within the network – insight that can be used to immediately remediate and deliver an ever better quality of experience for the user.
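As a simple illustration of that analytics-driven diagnosis, the sketch below averages per-link utilisation samples and flags likely pinch points; the link names, capacities and the 80% threshold are invented for the example rather than taken from any SD-WAN product.

```python
# Minimal sketch: flag likely pinch points from per-link traffic samples.
# Link names, capacities and the 80% threshold are illustrative assumptions.
from statistics import mean

link_samples_mbps = {
    "site-A MPLS":     {"capacity": 100, "samples": [42, 51, 47, 55, 49]},
    "site-A internet": {"capacity": 200, "samples": [176, 181, 190, 188, 179]},
}

def pinch_points(links, threshold=0.8):
    """Return links whose average utilisation exceeds the threshold."""
    flagged = []
    for name, data in links.items():
        utilisation = mean(data["samples"]) / data["capacity"]
        if utilisation > threshold:
            flagged.append((name, round(utilisation * 100, 1)))
    return flagged

for link, pct in pinch_points(link_samples_mbps):
    print(f"{link}: {pct}% average utilisation - candidate for rerouting or upgrade")
```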
Conclusion
As the market reaches the tipping point and SD-WAN becomes the technology of choice, there is no doubt that a Managed Service option should be the easiest approach: culturally familiar but with the added benefits of better performance and radically improved response. Plus, with the number of MSPs now offering Managed SD-WAN Services, companies have a far broader choice. For multi-site organisations, in particular, the option of a hybrid SD-WAN and MPLS approach offers flexibility for businesses to start migrating to SD-WAN now, without being constricted by lengthy MPLS contracts.
Given the increasingly application centric nature of most businesses, there is a strong case for IT teams to continue to reallocate resources towards real-time application performance monitoring and management. As such, not only will expectations of MSP responsiveness rise steeply but it will also be important to consider a potential migration to the SD-WAN as a Service model in the future.
A WAN investment today needs to be future-proofed for at least ten years – so MSPs must not only support both models but also a hybrid of MPLS and SD-WAN sites, so that organisations can begin to embrace the benefits that SD-WAN can offer. Furthermore, supporting migration between the two models – even embracing a blended approach that opens up levels of permission and authority to the IT Manager for better visibility and control over the network – will offer businesses rapid access to the world of SD-WAN.
IT automation has risen rapidly in the past few years, offering some big benefits in efficiency and productivity. Brett Cheloff, VP at ConnectWise Automate, discusses the major advantages and future of IT automation.
When thinking about IT automation, some of the biggest benefits that come to mind are efficiency and increased productivity. Automation can certainly help companies do more with less by introducing smart workflows and removing redundant tasks. Automation also increases the visibility of what’s happening in daily operations, freeing up even more time to focus on critical business matters. Here are five ways businesses can benefit from IT automation.
1. Improved organisation
Automation tools distribute information seamlessly. For instance, by automatically creating a quote for a new project and invoicing it from the same system, all of the information regarding the project is kept in one place. There is no need to go looking for that information across multiple systems.
Automation ensures that information is automatically sent where it’s needed, keeping it current and preventing teams from spending copious amounts of time looking for it.
2. Reduced time spent on redundant tasks
One of the biggest benefits of IT automation is the amount of time teams save on manual, repeatable tasks. Leveraging automation helps IT professionals reduce the time spent on creating tickets and configuring applications, which adds up over time. Based on estimates, it takes 5 to 7 minutes for tech teams to open new tickets due to manual steps like assigning companies and contact information, finding and adding configurations, and more.
With automatic ticket routing, time spent on tickets can be reduced to just 30 seconds. For someone who works on 20 tickets a day, that results in around 90 minutes a day, or 7.5 hours a week, of additional productivity.
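As a rough illustration of both points, the sketch below pairs a hypothetical keyword-based routing rule with the time-saving arithmetic quoted above; the keyword-to-team mapping is invented for the example.

```python
# Minimal sketch of rule-based ticket routing plus the time-saving arithmetic
# quoted above. The keyword-to-team mapping is a hypothetical example.
ROUTING_RULES = {
    "vpn": "Network Team",
    "password": "Service Desk",
    "server": "Infrastructure Team",
}

def route_ticket(summary: str) -> str:
    """Assign a ticket to a team based on keywords in its summary."""
    text = summary.lower()
    for keyword, team in ROUTING_RULES.items():
        if keyword in text:
            return team
    return "Triage Queue"

print(route_ticket("User cannot connect to VPN from home office"))  # Network Team

# Time saved: ~5 minutes of manual steps reduced to ~30 seconds per ticket.
manual_minutes, automated_minutes, tickets_per_day = 5.0, 0.5, 20
saved_per_day = (manual_minutes - automated_minutes) * tickets_per_day
print(f"{saved_per_day:.0f} minutes/day, {saved_per_day * 5 / 60:.1f} hours/week")
# -> 90 minutes/day, 7.5 hours/week
```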
3. Well-established processes
The best way to get the most benefit from IT automation is to ensure workflows and processes are set up in advance. Establishing these workflows creates a set of standards that everyone on the team can follow without having to do additional work. Once these workflow rules are in place, they help build consistency and efficiency within operations – and ensure a consistent experience is delivered to customers, regardless of which teammate handles their tickets.
Furthermore, the documented, repeatable processes can help businesses scale by making it easier to accomplish more in less time. Teams can focus on providing excellent customer service and doing a great job when they don’t need to waste time thinking about the process itself.
4. Multi-department visibility
Maintaining separate spreadsheets, accounts, and processes makes it difficult to really see how well a business is doing. To see how many projects are completed a day or how quickly projects are delivered, professionals may need to gather information about each employee’s performance to view the company as a whole.
Automation tools increase visibility into business operations by centralising data in a way that makes it easy to figure out holistically how a company performs, in addition to the performance of each individual team member. It’s even possible to isolate the performance of one department with automation.
5. Increased Accountability
With so many different systems in place, it can be difficult to know exactly what is happening at every moment. For instance, if an employee wanted to delete tasks they didn’t want to do, businesses need processes in place to know this went on. What if deleting something was an accident? How would you know something was accidentally deleted and have the opportunity to get the information back?
Automation reduces human errors by providing a digital trail for the entire operation in one place. It provides increased accountability for everybody’s actions across different systems, so issues like these aren’t a problem.
Automation is an easy way to develop the increased accountability, visibility, and centralised processes required for businesses to grow and serve more clients. Technology that helps manage workflows, automates redundant tasks and provides a consistent experience to all customers will help businesses deliver superior levels of service – and improve the bottom line.
Designing and deploying a new or modernised data centre is a rewarding endeavour; both for the engineers and architects, and also for the businesses that reap the benefits of agility, scalability, and performance that come along with it.
By JR Rivers, Co-founder and CTO, Cumulus Networks.
In order to successfully transform the network, businesses must be prepared to ask challenging questions that drive conversations around open networking, automation, modularity, scalability, segmentation and re-usability. Before moving forward, it is essential that organisations consider the following list of business and technical guiding principles:
1. The network architecture should use standards-based protocols and services: Over the past few years, adoption of open source technology has increased significantly, as more organisations discover its considerable advantages which extend far beyond low costs. While proprietary protocols and closed ecosystems require highly specialised engineers, limit inter-operability, and force organisations into particular designs that are difficult to escape, standards-based protocols promote interoperability, competition and innovation.
2. The network should be serviceable without downtime: It goes without saying that fault tolerance is a must. Service outages are always a risk and can occur for any reason, to any type of organisation, leading to financial and reputational damage. For example, a 2017 AWS outage cost publicly traded companies $150 million, and the recent Google cloud outage generated negative headlines around the world.
To prevent outages, all compute nodes must be dual-connected to redundant upstream Leaf switches. Leaf switches should have redundant peer-link connections between each other, and to each Spine switch. Equal-cost multi-pathing ensures that all paths are active and forwarding. Inserting or removing a Leaf or Spine switch should not affect production traffic.
3. The network architecture should promote automation: Manual configuration changes are time-consuming and prone to human error. When designing or monitoring a network, it’s important to ensure that it’s running as intended and adheres to set network and security policies. Automating tasks can make the network self-healing, more consumable, and easier to audit. Familiar Linux APIs allow DevOps engineers to integrate the network into automation engines without the friction of dealing with numerous, vendor-specific APIs. Having the same network operating system (NOS) on each device, regardless of the underlying hardware, opens the door for simplified network automation.
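As a minimal sketch of what that can look like in practice, the snippet below renders per-device configuration from a single declared intent and runs a basic audit; the device names, addresses and template format are assumptions for illustration, not tied to any particular NOS or automation engine.

```python
# Minimal sketch of template-driven configuration, assuming a uniform NOS where
# settings can be rendered as plain text. Names, addresses and the template
# format are hypothetical.
DEVICES = {
    "leaf01": {"loopback": "10.0.0.1/32", "asn": 65001},
    "leaf02": {"loopback": "10.0.0.2/32", "asn": 65002},
}

TEMPLATE = """hostname {name}
interface lo
  address {loopback}
router bgp {asn}
"""

def render_config(name: str, intent: dict) -> str:
    """Render one device's configuration from the declared intent."""
    return TEMPLATE.format(name=name, **intent)

def audit(rendered: str, intent: dict) -> bool:
    """Basic check that the generated config still matches the intent."""
    return intent["loopback"] in rendered and str(intent["asn"]) in rendered

for device, intent in DEVICES.items():
    config = render_config(device, intent)
    assert audit(config, intent)
    print(config)
```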
4. The network should be consumable: Tied into automation is the concept of consumable self-service networks. Whether the data centre is private and serving a single organisation, or built for a busy IaaS platform, having the capability to empower administrators or customers with self-deployable networks should be a key consideration with new network designs. Creating networks in the public cloud is a fundamental feature everyone expects.
Customers should have the capability to deploy segmented networks on the fly, without the intervention of network engineers. A Linux NOS is ideal for orchestration solutions, due to native Linux modules and APIs. Deployments that harness EVPN with automation facilitate the deployment of new networks while simultaneously enabling customers to build their own on the fly.
5. Physical boundaries should not restrict segmentation capabilities: Modular portability is critical when thinking about network design. Organisations can use EVPN to compartmentalise and segment tenant traffic across the data centre, providing an open and flexible architecture irrespective of physical boundaries, transporting network segments anywhere in the data centre or across data centres.
6. The network must be scalable: A Leaf-Spine Clos architecture is ideal for data centres; with equal-cost multipathing of 128 links, Leaf-Spine pods can become massive. Additional pods can be added to grow horizontally, or new tiers to grow vertically, interconnecting indefinite numbers of pods. EVPN scales with the physical topology, providing the ultimate modularity for scale. If port-density or port-speeds in specific areas become insufficient, a disaggregated model allows data centre admins to swap hardware modularly, automating the NOS and network provisioning with ONIE, providing flexibility at the micro and macro scale.
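For a feel of the numbers, here is a back-of-envelope sizing sketch for a two-tier Leaf-Spine pod; the port counts are illustrative assumptions, and only the 128-way equal-cost multipathing figure comes from the text above.

```python
# Back-of-envelope sizing for a two-tier Leaf-Spine pod. Port counts below are
# illustrative assumptions, not recommendations.
def leaf_spine_capacity(leaf_ports: int, uplinks_per_leaf: int, spine_ports: int):
    """Return (max leaves, server-facing ports) for a two-tier pod."""
    max_leaves = spine_ports               # each leaf takes one port on every spine
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf
    return max_leaves, max_leaves * downlinks_per_leaf

leaves, server_ports = leaf_spine_capacity(leaf_ports=48, uplinks_per_leaf=4, spine_ports=32)
print(f"{leaves} leaves, {server_ports} server-facing ports per pod")
# Adding pods (horizontal growth) or another tier (vertical growth) scales further.
```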
7. Network changes should be verifiably testable before implementation: Downtime and SLA violations can cost organisations dearly, in the form of refunds or reputational damage. Organisations can reduce the risk of downtime by fully simulating network changes and upgrades before flipping the switch and making them live, assuring that simulated, tested network changes will be successful on systems in production.
Modern IT demands automation, scalability and agility. The implications for businesses are now not just technological but economic as well. An inflexible network becomes expensive to scale at the speed of customer expectations and business innovation. Business innovation puts pressure on data centres to offer extensive automation of the entire network life cycle, from provisioning and deployment to day-to-day management and upgrades.
When designing their next data centre network, organisations should carry the above guiding principles with them from project inception through to network deployment. While the list is far from all-encompassing, these ideas will help generate specific results for a highly effective and agile data centre, built to scale, and designed to lead.
The global threat landscape has evolved dramatically over the years. The type, origin and targets of threats change on an almost daily basis and the pace at which attacks happen is rapid and getting quicker. This is because they are being carried out by a highly motivated group of actors that are at the cutting edge of technology - and prepared to embrace every possible threat vector in order to achieve their goals.
By James Barrett, senior director EMEA, Endace.
This has created a fragmented security market. Historically, innovative and very effective software solutions were developed to counter a particular attack vector. However, over time, advances in technology have created more and more threat options that older defences are not designed to fight, which means organisations are being overwhelmed by the volume and variety of threats.
Why big isn't always better
Large enterprises – typical targets for cyber criminals – tend to have many tools for defence. This would suggest they have all the bases covered in the event of an attack. However, the volume of security tools is both a blessing and a curse.
While these tools help spot and mitigate potential attacks, the flipside is that they create a monumental amount of work for in-house analysts and security teams to deal with. Aside from the time required to read, understand and act on information, dashboards need to be managed with the resulting information then digested and processed. Similarly, all these tools need to be upgraded and understood, which involves training. Teams are spending more time trying to wrangle the tools than actually managing potential threats.
One of the most time consuming, but most important, roles of security analysts is triaging the data - quickly understanding what needs prioritising versus what doesn’t. However, in order to do this effectively internal teams need to ensure they can get data of all types, investigate it from all angles, and create an output suitable for action. This relies on deploying solutions that can aggregate and correlate data from a range of different tools.
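A minimal sketch of that kind of aggregation and correlation might look like the following: alerts raised for the same host within a short time window are grouped so related events can be triaged together. The tools, alert fields and window size here are hypothetical.

```python
# Minimal sketch: correlate alerts from several tools by host and time window.
# Tool names, fields and the 15-minute window are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"tool": "IDS",  "host": "10.1.2.3", "time": datetime(2019, 5, 1, 9, 0),  "severity": 3},
    {"tool": "EDR",  "host": "10.1.2.3", "time": datetime(2019, 5, 1, 9, 4),  "severity": 4},
    {"tool": "SIEM", "host": "10.9.8.7", "time": datetime(2019, 5, 1, 9, 30), "severity": 2},
]

def correlate(alerts, window=timedelta(minutes=15)):
    """Group alerts for the same host that occur within `window` of each other."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        clusters = by_host[alert["host"]]
        if clusters and alert["time"] - clusters[-1][-1]["time"] <= window:
            clusters[-1].append(alert)
        else:
            clusters.append([alert])
    return by_host

for host, clusters in correlate(alerts).items():
    for cluster in clusters:
        severity = max(a["severity"] for a in cluster)
        tools = {a["tool"] for a in cluster}
        print(f"{host}: {len(cluster)} related alerts from {tools}, max severity {severity}")
```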
The need for best in breed
Organisations are continually looking for better ways to tie data together in order to get a more holistic view of what’s happening in their universe, but one of the things we hear regularly is that there is no easy way to do this. If organisations deploy single-stack, integrated solutions from a single vendor, they might get better integration between various functions, but lose out by not being able to deploy best-of-breed solutions for those individual functions. So they’re stuck between a rock and a hard place whichever route they choose.
A chief information security officer needs the ability to pivot between interfaces and build what is most suitable for their organisation. Similarly, they need to make the most of their analysts’ time and not waste it moving from platform to platform, training on tools or navigating internal processes while trying to shore up defences in the best way possible. This means tools need to be able to talk to each other.
A common platform means less complexity in the hardware deployment – which also reduces cost – because an organisation can deploy a common hardware layer and then simply change functions in software. It also enables a common standard investigation workflow process, regardless of the security tool being used - the tool raises an alert, the analyst can go from the alert to the packets to see what happened and respond appropriately. All the tools see exactly the same source of network traffic, and analysts can reference that same common source of truth – which provides a common reference point.
Winds of change
Thankfully, more and more vendors are recognising that a heterogeneous, multi-vendor environment is a reality, and that enabling integration between their solutions and those of other vendors is necessary - and something that customers are asking for. Many products - like SIEMs, or SOAR (Security Orchestration, Automation and Response) platforms - or even AI tools like Darktrace's and BluVector's - are designed to work with a wide range of other vendor solutions because they need to in order to be able to automate response. And firewall and networking vendors like Cisco, Palo Alto Networks and Juniper are building in the functionality to enable their systems to receive "instructions" from security tools - for example, isolating a host that a tool has identified as possibly compromised from other hosts on the network.
We are also seeing a commonality of framework coming into certain areas, building on the excellent Common Vulnerabilities and Exposures (CVE) database to identify and track vulnerabilities. There are others too, both open source and commercial: examples include the National Vulnerability Database, the Exploit Database and the Vulnerability Notes Database. Advances in data analytics also mean there are particular frameworks that can be applied to the same data over and over again to help find solutions and answers to the same problem but from different angles.
Certainly there is a move among individuals to share programmes via the open source community, but openness is not something that is yet top of mind for organisations. It is simply not part of the security strategy. Not yet anyway. The combination of the evolving threat landscape, the constant improvement of tools and the appreciation that we’re all fighting the same battle against a common enemy means that the market motivation is there to have joined-up security solutions. It is now incumbent on the vendors to bring this vision to life.
Mobile device management (MDM) is a common requirement for enterprises. Mathivanan Venkatachalam, Vice President, ManageEngine, shares his top tips for shaping a comprehensive MDM strategy.
Remote work policies are becoming increasingly popular as businesses recognise how providing a better work-life balance can result in an overall boost in productivity. Bring your own device (BYOD) culture is also on the rise with businesses allowing employees to use their own preferred personal devices such as smartphones, tablets, and laptops instead of devices supplied by the business.
From a security perspective, this puts business networks at risk. Companies enabling remote work environments and BYOD policies must formulate an effective MDM strategy. Here are the elements that enterprises should consider when approaching MDM:
1. Set a clear objective
Begin by selecting which of the four main device categories each device falls into. These categories are BYOD (user-owned devices); choose your own device (CYOD); corporate-owned, personally-enabled (COPE) devices; and single-use devices. Once the device categories have been defined, it’s essential to set clear objectives on what the business needs to provide to ensure data security is managed effectively.
This can be achieved using a set of questions, including: Which types of devices are permitted? Which employees are eligible to access corporate data from their mobile devices? What level of business access should the enterprise provide from each device? What security policies have to be imposed on each device? And finally, which apps should be provided?
The answers to these questions will help identify basic strategies to allow enterprises to use mobile devices for corporate access.
2. Ensure clear communication
An effective MDM strategy must provide clear communication to end users around what they will be accessing from their mobile devices and what level of access they will have on the device. For example, in an enterprise allowing employees to use their personal devices, employees should be given a clear understanding of what data they can access from their mobile devices and whether their personal data will be accessible to the company.
Communicating to employees what changes are afoot and what access and restrictions they can expect from their devices will help avoid an influx of help desk tickets when the changes take place on their devices.
3. Manage data by device
The main purpose of an MDM strategy is to identify and secure the data on devices. There are three types of data on mobile devices: data at rest, data in transit, and data in use. Each of these must be managed in its own way.
When it comes to data at rest, it’s important to encrypt the mobile device. Unauthorised data transfer should be restricted, whether it's through USB, Wi-Fi, or Bluetooth. If the device is stolen, the sensitive data on the device should be wiped.
Data in transit requires routing all network traffic to a common, secure proxy or VPN channel. If the enterprise suspects public Wi-Fi is not secure enough for users to access data through, that type of Wi-Fi connection can be prohibited. This way, organisations can ensure devices only use secure Wi-Fi connections while avoiding public ones.
When it comes to data in use, enterprises should blacklist certain applications from devices to prevent access to malicious websites. Data sharing between managed and unmanaged apps and backing up to third-party cloud services or other third-party applications should be restricted.
Sensitive documents should also be managed securely. Sensitive data can be distributed to devices while ensuring it is only accessible from a secure, managed app. For example, if an enterprise allows certain devices to access email from Exchange Server, it should ensure that devices can only access the data using a managed application. If the device is not managed, access to email from Exchange Server should automatically be blocked.
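As an illustration of the kind of rule this implies, the sketch below decides whether a device may reach corporate email based on its management state; the attribute names and the policy itself are hypothetical, not taken from any specific MDM product.

```python
# Minimal sketch of an access-policy check in the spirit described above:
# only managed, encrypted devices running the managed mail app may reach email.
# The attribute names and policy are hypothetical.
def allow_email_access(device: dict) -> bool:
    """Permit email access only from managed devices via the managed app."""
    return (
        device.get("managed", False)
        and device.get("encrypted", False)
        and device.get("mail_app") == "managed-mail-client"
    )

devices = [
    {"id": "byod-0042", "managed": False, "encrypted": True, "mail_app": "native-mail"},
    {"id": "cope-0107", "managed": True, "encrypted": True, "mail_app": "managed-mail-client"},
]
for device in devices:
    verdict = "allowed" if allow_email_access(device) else "blocked"
    print(f"{device['id']}: email access {verdict}")
```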
4. Implement one solution to manage all devices
Implementing and managing an effective MDM strategy can be made easier by investing in a solution that enables device management anywhere at any time. It should include the capability to scan devices remotely, install agents, and monitor for and install operating system updates as well as other software updates. The solution should also have the capability to manage prohibited software and add or remove devices from the business network.
By following these steps and implementing a singular device management solution, enterprises will benefit from a safe, secure, and reliable MDM strategy that works around the clock and requires minimal input from the IT team.
As the digital skills gap increases, the demand for developer talent is reaching a record high.
By Nigel Abbott, Director at GitHub.
European businesses from all industries are innovating to keep up with digital transformation. Organisations are locked in a continuous cycle: customers are asked for instant feedback on their experience, and businesses then collect this data and improve their services in ever shorter timeframes. However, for this relationship to be truly beneficial, a company must be able to change fast in response to increasing consumer demands. It is unsurprising, then, that the developers who continuously innovate businesses’ digital offerings have become the key players in digital transformation across Europe.
The catalyst to digital transformation
Digital transformation feeds into all aspects of a business from logistics and finance, through to customer experience. No matter the business function, software development provides a foundation for businesses to seize a range of opportunities to digitise their offering, from creating market-leading apps through to streamlining their operations. As the demand for innovative software grows, businesses must optimise available resources to ensure fast development.
Developers are using Open Source Software (OSS) models to develop software because of their efficiency in joining disparate, geographically separate teams asynchronously, enabling them to turn ideas into solutions in a quick time frame. In turn, software development is influencing and redefining industries. Take IKEA, for example: just last year the company announced it was bringing a huge focus to its digital experience following demands for catering to online consumers during a period of rapid growth in online shopping. It even launched projects such as co-creation, a digital platform in which consumers can make suggestions during the development process of products. This ability to evolve the business model, understand and interact with customer feedback and pivot fast is helping businesses like IKEA retain loyalty and customer spend.
Keeping pace with your customer
There is a huge disparity when it comes to digital innovation within businesses. On one side there are disruptors leading transformation in their industries, while traditional businesses are grappling to keep pace and remain competitive. Look at Spotify; set up in Sweden in 2008, the business has since grown into the most popular online music streaming service in the world. Spotify’s culture is built around the consumer’s experience of discovery, where listeners have a collective impact in shaping the platform. The developer teams are the key driving force behind this; constantly working to innovate the service, introducing quirks throughout the year such as the annual ‘Wrapped’ campaign, which picks out user listening habits and brings their global consumer base together in conversation. Its success has placed pressure on its competitors to keep up as it continues to dominate the European market.
Software development is ultimately enabling businesses to deliver new products and services focused around the demands of their customers. Digital transformation is disrupting the traditional business value chain, just as the role of developer is constantly changing: new technologies and new languages appear regularly which require training and cultivating creativity. Through using software development, companies can also innovate solutions that are already strong, to further improve the customer experience. Developers have undoubtedly fuelled digital transformation, and lack of a talented developer team means companies will fail to keep up with their competitors.
Rise in demand for specialist developer skills
As the above examples demonstrate, there is a growing market for skilled developers across European businesses. They are the roots of organisations’ efforts to drive innovation, helping them remain relevant in a fast-evolving, digital world. Recent research from Stripe found that the number of software developers employed increased 56 percent on average from the previous year.
However, the same report showed that 42 percent of developers still weren’t confident that their company had enough skilled employees or sufficient engineering resources to react to technology trends in their industry. As the infrastructure of companies increasingly becomes programmable software systems, it is important to encourage staff to learn programming languages. This in turn boosts in-house developer talent, whilst meeting the rapidly increasing demand for OSS. For most large tech companies, in a world of free platforms and methodologies, OSS is the accepted method for software engineering because of the extensive benefits it brings to the business, such as faster software development and access to publicly available code.
For businesses of any size, in any sector, the key to digital transformation is staying ahead of customer expectations and the competition. It is the strength of the developer team that will make or break a business’ efforts here — and they have rightfully become the heart of digital transformation.
Digital interactions of all guises may benefit from the integrity and resilience of blockchain, but a decade on from when it was first introduced, the vast potential is still to be fully harnessed.
By Maurizio Canton, TIBCO’s CTO EMEA.
Decentralised, tamper-proof and traceable, the technology’s core ability to make data immutable has made it a compelling solution, most notably for use in transactions where trust and visibility are critical, and its growing maturation has expanded its scope beyond the world of cryptocurrency in finance.
Yet complexity and a lack of standardisation continue to deter developers and thwart mainstream take-up. A recent study by Deloitte lays bare the extent of the inconsistency and highly fragmented nature surrounding the technology’s creation and implementation. It reveals that the 6,500 active blockchain projects currently featured in the cloud-based repository GitHub are written in different coding languages, using multiple platforms and protocols, consensus mechanisms and privacy measures.
Fundamentally, the technology is a complex web to negotiate. It’s an ecosystem that demands a deep understanding from the developer and the ability to overcome challenges including integration within existing legacy systems and the lengthy deployment times in the cloud.
While for some there is an acceptance that blockchain’s growth trajectory is long term, for others the seismic shifts within their particular industry mean that accelerating blockchain’s commercialisation has become a necessity in order to remain competitive. As such, the obstacles must be overcome now.
Driven by the juggernaut of open banking, financial services is an obvious example. Research by industry analyst CACI predicts that 35 million people globally will be using mobile banking as their preferred choice of platform by 2023. Blockchain technology is therefore set to play a pivotal role in its success, as the mechanism for forging trust between customers and third-party financial services.
Furthermore, this is the year we will see the provision of services through open banking become the norm, and with it, the subsequent integration of mobile wallets, blockchain, video chat and data analytics with existing offerings viewable across one complete interface.
We see a similar urgency in the gaming sector, a $140 billion global industry driven predominantly by digital micro-transaction economies. Finding solutions that can support these economies at scale with cross-chain interoperability has become the Holy Grail for companies that need to set the foundation for long-term, sustained consumer adoption of blockchain technology.
So, what are they turning to? Well, tools that foster agility by overcoming storage and computing hurdles and negate the requirement for a deep programming knowledge. Notably, a new breed of plug and play iterations are gaining strong traction, and for very good reason.
When rooted in integration and visualisation expertise, these graphical interfaces can sit over the underlying technology to enable the creation of easily written, visualised, tested and audited smart contracts - one of blockchain’s core concepts.
Furthermore, the use of low-code development makes it a more accessible, user-friendly option for those without programming experience. Complex transactions can be visualised, customised and identified, and the ability to run on any blockchain or cloud platform boosts flexibility, operational efficiency and time to market.
This enhanced agility becomes apparent when we home in on one specific area - the handling of computation processes that may need to flit between being on and off the blockchain.
While inevitably the bulk of activity will happen on-chain – for example, maintaining the security and integrity of transactions processed at high speed on a bank’s network – there are times when it is beneficial for specific processes to be handled off-chain. I am thinking specifically of validating a large data file, which may previously have been handled on a standalone basis, before it is let loose on the main network.
Minimising any delay, friction or risk when operating across the two spheres becomes a critical concern that must be addressed if blockchain is to deliver its true value without compromise.
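To make the on-chain/off-chain split concrete, here is a minimal sketch that hashes a large file off-chain and records only the small digest in a simulated, hash-linked ledger; the file path is hypothetical, and a real deployment would anchor the digest on whichever blockchain platform is in use.

```python
# Minimal sketch of the off-chain pattern described above: validate and hash a
# large data file off-chain, then record only the small digest on the chain.
# The in-memory "ledger" stands in for whichever blockchain platform is used.
import hashlib, json, time

ledger = []  # simulated chain: each entry links to the previous entry's hash

def off_chain_digest(path: str) -> str:
    """Hash a (potentially very large) file without putting it on the chain."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def anchor_on_chain(digest: str) -> dict:
    """Append a small record referencing the off-chain data."""
    previous = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"digest": digest, "prev": previous, "timestamp": time.time()}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

# Usage (assumes a local file exists at this hypothetical path):
# anchor_on_chain(off_chain_digest("reports/large_dataset.csv"))
```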
DW talks to Tim Wilkes, Marketing Director, Kohler Uninterruptible Power, about all things UPS – covering the company’s products and services portfolio, some key industry issues and ending with some interesting thoughts on power availability into the future.
1. We must start with the name change – Uninterruptible Power Supplies Ltd has become Kohler Uninterruptible Power (KUP) – what’s the thinking behind this?
Firstly, it’s worth mentioning that UPSL became part of the Kohler Co organisation in 2008, so there has been no ownership change or need to be concerned about procedural or staff changes. Our decision to rebrand as Kohler Uninterruptible Power was driven by Kohler raising its own profile globally, through activities like the sponsorship of Manchester United men’s and women’s football teams, as well as our own portfolio expanding from purely UPS systems into the complementary areas of emergency lighting inverters and generators. It therefore made a lot of sense to rename the business to reflect those developments.
2. In other words, is it just a cosmetic change, or a signal of intent?
The answer to that is a little of both. We have taken the opportunity to freshen our cosmetic appearance, and as you can imagine there has been a lot of work updating logos and signage across the company. However, the reference to uninterruptible power, rather than purely uninterruptible power supplies, more accurately reflects our focus on supporting our clients’ solutions, rather than just products.
3. Can you give us some brief background on the company’s journey up to the name change at the beginning of this year?
Certainly. The company was established in 1997, with the goal of building a portfolio of advanced UPS systems combined with a national support and maintenance infrastructure. In 1999 it introduced the first three-phase, transformerless UPS to the UK and Ireland, and in 2001 there was another first with the introduction of the first transformerless modular three-phase UPS. The success of these innovations set us on the track that led to our acquisition by Kohler in 2008 and their investment in building the business, subsequently leading to a move to a new HQ in 2012. In the interim, we established our own service and sales company in Singapore in 2010, and a couple of years ago acquired our partner in Ireland, Pure Power Systems, who have also now rebranded to Kohler Uninterruptible Power.
4. And how would you define KUP’s UPS USP – what distinguishes the company and its technology in what is quite a crowded market place?
That’s an interesting question and something I as a marketing professional was keen to try and distil when I joined the organisation last year. We often find people originally came to us for innovative products but, having experienced our pre-project design, project engineering and ongoing service and maintenance, they return with follow-on opportunities. Therefore, our USP is that “special blend” of highly efficient products and a real focus on support by people who know their stuff and care about our customers.
5. Focusing on recent company news and thought leadership activities – can you talk us through the new PowerWAVE 9250DPA modular UPS?
Yes, the new PowerWAVE 9250DPA is something we are really proud to be bringing to market. Not only does it provide a scalable solution from 50kW to 1.5MW, with 250 kW N+1 capability in a rack, but it does so with market-leading efficiency: 97.6% at module level and 97.4% at system level. And these are independently certified figures, which I think is important when we consider how self-certification has been undermined through problems in building construction and automotive manufacturing.
Smart deployment of modules via our XTRA VFI mode increases efficiency even further by balancing the number of modules which are online and which are on hot standby. I was recently with some customers at a factory test with a fully instrumented UPS in this mode and they were surprised and impressed by how effective this could be.
Combined with the compact footprint, ease of access design and modular functionality, we can see the PowerWAVE 9250DPA being a great addition to our portfolio.
6. And modular UPS systems are going to become increasingly popular over time?
I think that the market has really woken up to the benefits of the modular approach – the scalability, faster time to repair, increasing availability and the cost effectiveness, especially where an N+1 or 2(N+1) topology is demanded. Industry reports confirm the trend we’ve seen and new innovations such as smart deployment / XTRA VFI will only help to drive growth.
7. Issues-wise, can you explain the relative pros and cons of on-line, off-line and line interactive UPS design architectures?
Off-line and line interactive UPSs work by monitoring the mains and providing protection only when the mains deviates outside set parameters. By design, this type of technology will therefore not provide continuous uninterrupted protection for all mains disturbances and is priced accordingly. Typically, off-line and line interactive UPSs are used for very small applications where total protection is less important. For the majority of UPS applications, on-line UPSs should be used due to the greatly improved resilience that they provide by producing a smooth, conditioned and uninterrupted source for the supported critical equipment.
8. Sticking with architecture, can you talk us through decentralised parallel architecture and why it’s important for ‘true modularity’?
With the growing trend for modular systems, I suppose it is not surprising that there are some solutions on the market that profess to be a lower cost, “halfway house” between standalone systems and truly modular systems with a fully decentralised parallel architecture (DPA). In these “semi-modular” systems there remain some shared components, which results in the same disadvantages as standalone systems. To clarify, KUP is not against standalone systems; we have some great standalone products of our own and, where an N+N topology is needed or the load is constant and unlikely to grow significantly, they can be a cost-effective solution.
However, the halfway-house, semi-modular products have muddied the water somewhat, so we would always recommend that engineers check what is common and what is shared in a UPS, rather than assuming a reference to “modular” means all critical components are unique per module – as would be the case in a truly modular product.
9. Moving on to the KUP product portfolio, what can you tell us about the single phase UPS systems?
Small, single-phase UPS systems are typically found in offices and IT rooms. Such operations are usually limited for installation space and/or budget, yet for their users, they are just as mission critical as a larger-scale data centre solution. We offer compact, flexible rack-mounted or tower solutions that are reliable and resilient, while providing single-phase power protection from 1 kVA upwards.
Yet we also provide critical power protection solutions for larger-scale applications needing a protected single-phase supply; our PowerWAVE 3000/TP delivers up to 80 kVA for mid-sized server rooms, industrial processes, networks and telecommunications systems. It’s simple to install, run and maintain.
10. And the three phase UPS systems?
The key issues for IT and telecommunications users across financial services, education, healthcare and other industry verticals are high availability and energy efficiency, along with cost-effectiveness, flexibility, scalability and compact sizing.
Our modular three-phase solutions, which include vertical and horizontal scaling possibilities, range from 10 kW to 3 MW. Up to 99.9999% (six nines) availability is possible, along with up to 97% energy efficiency, or over 99% on systems where Eco-mode can be used. With our Xtra VFI smart module deployment mode, high efficiency is even maintained at low loads below 25%. Hot-swappable modules minimise system downtime, while scalability and efficiency manage total cost of ownership and real-estate demands.
11. How do you see Lithium-ion solutions impacting on the data centre over time?
We are still in the early stages of Li-ion adoption. Pricing – originally a significant barrier to Li-ion uptake in data centres and other applications – is still decreasing, although at a lower rate than it used to. Manufacturers have come a long way in addressing safety fears through highly segregated cell designs and mandatory advanced monitoring. There are also various projects to improve Li-ion recyclability, which is always a user concern today. Additionally, users appreciate that Li-ion, even with its mandatory monitoring system, uses a smaller footprint than lead-acid.
Because of these factors, adoption of Li-ion is increasing. However, the long-established VRLA solutions will remain popular, especially as we see a potential for increasing their life by 30% through using battery management technology.
12. KUP also offers generators for the data centre market?
Yes, we do; we offer standalone generators, or complete UPS/generator sets, with capacities from 5 kVA to 4.4 MW. Single-phase and three-phase solutions are available. Overall, our generators offer high reliability, low cost and durability, and are available with a choice of engine manufacturers, including Kohler’s own engine range.
We also offer ancillary products, with a choice of automatic changeover panels. Our control panels range from entry level types to models that can run the entire standby generator. All generator products are backed with technical support, commissioning and maintenance services.
13. Services is a big area for the company – starting with maintenance?
Actually, our range of service offerings begins before our solution is supplied, or maintenance is required. We can perform a free site survey to establish and advise on an operation’s requirements for a conditioned power solution, including ongoing maintenance and remote monitoring needs. Then, we can perform installation and commissioning services.
Once up and running, we offer maintenance contracts to cover UPSs, generators and batteries. These are tailored to individual requirements, but generally cover emergency callouts and scheduled maintenance to agreed service levels. We also hire out UPSs and generators. Our onsite presence is complemented by our remote monitoring services.
Overall, we use our technical expertise, and depth of power protection industry knowledge, to ensure that customers cost-effectively achieve the level of security and protection they need.
14. And there’s a range of services around batteries?
Certainly. We supply and fit batteries of all types into all models of UPS and secure power systems. KUP also offers a battery replacement programme for a wide range of battery supported products. With regular UPS battery maintenance, we ensure that weak battery blocks are replaced before they put the load at risk by failing during operations.
Predictive maintenance is optimised through regular battery impedance testing; a significant change in a battery’s impedance indicates that it’s approaching end of life and should be replaced. We also perform load bank testing, as it’s the only way to fully prove the integrity of the complete system. Then, at end of life, we remove batteries in line with the Hazardous Waste Regulations and place them into the recycling chain. I find it very encouraging to know that 98% of a KUP UPS VRLA battery is recycled, either into more batteries, chemicals for use in other manufacturing processes, or as polymer to make other plastic items.
These are in addition to the battery monitoring and management services we offer, discussed elsewhere.
15. And monitoring solutions?
In fact, we provide remote management as well as monitoring solutions for UPS batteries. Through PowerNSURE, our integrated network-based battery monitoring and management service, we can monitor the internal resistance, temperature and voltage of every individual battery. However, PowerNSURE also manages the battery condition, and runs an equalisation process, by regulating the charging voltage. This can help increase battery life by up to 30%.
The system also regularly generates reports displaying key parameters in graphical form, providing clear warning when remedial action is needed.
16. What advice would you give to end users who are looking to upgrade and/or replace their existing UPS systems – what should they look out for?
UPSs, when looked after, typically last 10–15 years. At this point it is advisable to look at replacements. A modern UPS will often reduce running costs due to increased efficiency, and when looking at replacement it is advisable to consider the whole installation to assess whether it is still valid. For example, is the resilience high enough for the criticality of the load? It is often relatively simple to improve the resilience, and therefore the availability, of the UPS system with a modern design. Another consideration should be the current and future load on the UPS, as this may have changed significantly from the original design and, when evaluated, may well change the requirements. Users should be advised against replacing the UPS like for like without an assessment.
17. Finally, we must ask, are you a glass half empty or half full when it comes to power availability in the long term?
On power availability, I think it is accepted that the UK’s energy capacity will need to increase year-on-year until around 2050, with most analysis putting that increase at around 30 per cent from today’s capacity levels. On top of growth due to electric vehicles, connected devices will also contribute to that increase. It might even be that future direct and indirect power requirements are being underestimated, due to the potential volume of data that will need to be transmitted and the number of physical devices.
When you look at the specifics, there are a lot of unknowns relating to what the energy mix will look like and how fast it will evolve. Production from fossil fuels, and even nuclear, is in decline, and the downward cost pressures on renewables and related energy storage solutions are undoubtedly going to mean we see their position grow. Which is a good thing.
One thing is certain, in 20 years, the market won’t look like it does today, with customers playing an ever-bigger role in determining the energy mix. The impact of shifting attitudes towards fossil fuels was demonstrated recently by Norway when residents successfully lobbied the country’s politicians to not pursue new exploration fields in the north of the country.
Overall, businesses in the energy sector, and there will be lots of new players, will need to provide services cleanly, cheaply and efficiently – and the only way to do that is to leverage new technologies across production and distribution. So, I think I am sitting on the fence a little. It can be done but only if everyone accepts that change is necessary and embraces the opportunities that are already or will become available.